
Conversation


@CjangCjengh commented Dec 18, 2025

Description

This PR introduces a new feature that allows users to control which assistant turns should participate in loss calculation by providing a loss_mask field in the dataset. This is useful for scenarios where certain responses in a multi-turn conversation should be excluded from training (e.g., low-quality responses or context-only turns).

Key Changes

  • src/llamafactory/data/converter.py:
    • Added _extract_loss_mask method to DatasetConverter to parse and validate the loss_mask field.
    • Updated AlpacaDatasetConverter, SharegptDatasetConverter, and OpenAIDatasetConverter to extract and pass _loss_mask.
  • src/llamafactory/data/processor/supervised.py:
    • Updated SupervisedDatasetProcessor (and PackedSupervisedDatasetProcessor) to accept loss_mask.
    • Implemented logic to mask labels (set to IGNORE_INDEX) for assistant turns where the corresponding loss_mask value is 0 (or False).

Usage

To use this feature, add a loss_mask field to your dataset entry. The loss_mask must be a list with a length equal to the total number of messages (user prompts + assistant responses).

Example (ShareGPT format)

{
  "conversations": [
    {"from": "human", "value": "Question 1"},
    {"from": "gpt", "value": "Answer 1 (Ignore)"},
    {"from": "human", "value": "Question 2"},
    {"from": "gpt", "value": "Answer 2 (Train)"}
  ],
  "loss_mask": [0, 0, 0, 1]
}

In this example:

  • The 1st message (Human) is masked by default (user turns are never trained).
  • The 2nd message (GPT) has mask 0, so its labels will be set to IGNORE_INDEX and it will not contribute to the loss.
  • The 3rd message (Human) is likewise masked by default.
  • The 4th message (GPT) has mask 1, so it will be used for training.
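
In processing terms, this comes down to building the label sequence turn by turn. Below is a minimal sketch of the idea, assuming ShareGPT-style messages and the IGNORE_INDEX = -100 sentinel that LLaMA-Factory uses for tokens excluded from the loss; the helper is illustrative, not the actual processor code.

IGNORE_INDEX = -100  # label value the loss function skips

def apply_loss_mask(messages, loss_mask, token_ids_per_message):
    # Prompts never train; assistant turns train only when their
    # loss_mask entry is truthy. Illustrative sketch only.
    labels = []
    for msg, keep, token_ids in zip(messages, loss_mask, token_ids_per_message):
        if msg["from"] == "gpt" and keep:
            labels.extend(token_ids)  # this turn contributes to the loss
        else:
            labels.extend([IGNORE_INDEX] * len(token_ids))  # excluded
    return labels

With the example above, only the tokens of "Answer 2 (Train)" keep their real label values; everything else becomes IGNORE_INDEX.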


@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @CjangCjengh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the data processing pipeline by introducing a loss_mask mechanism, allowing users to precisely control which parts of multi-turn conversations contribute to the training loss. This feature is particularly useful for scenarios where certain assistant responses, such as low-quality or context-only turns, need to be excluded from the model's learning objective, thereby improving training efficiency and model quality.

Highlights

  • Dataset Loss Masking: Introduced a loss_mask field in datasets to allow fine-grained control over which assistant turns contribute to the loss calculation during training.
  • Data Converter Integration: Modified existing dataset converters (Alpaca, ShareGPT, OpenAI) to extract and validate the new loss_mask field from dataset examples, ensuring proper format and length (see the sketch after this list).
  • Loss Calculation Control: Implemented logic within the supervised dataset processor to set labels to IGNORE_INDEX for assistant turns where the loss_mask indicates they should be excluded from loss, effectively skipping their contribution to training.
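
The converter-side validation could look roughly like the sketch below. The method name _extract_loss_mask and its home in src/llamafactory/data/converter.py come from the PR description, but the body is an assumption about what "parse and validate" entails, not the actual code.

def _extract_loss_mask(self, example: dict, num_messages: int):
    # Sketch: return a validated loss_mask, or None if the field is absent.
    loss_mask = example.get("loss_mask")
    if loss_mask is None:
        return None  # feature unused for this sample
    if not isinstance(loss_mask, list) or len(loss_mask) != num_messages:
        raise ValueError("loss_mask must be a list with one entry per message")
    return [1 if value else 0 for value in loss_mask]  # normalize bools to ints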


@gemini-code-assist bot left a comment:


Code Review

This pull request introduces a useful loss_mask feature to selectively control loss calculation for different turns in a conversation. The implementation is mostly solid, with changes to data converters to extract the mask and to the supervised processor to apply it. I've found a critical bug in the supervised.py processor where the loss_mask is not handled correctly when mask_history is enabled, leading to masks being applied to the wrong turns. I've provided a detailed comment with a suggested fix that also refactors the code for better clarity and correctness. Once this issue is addressed, the PR should be in good shape.

@hiyouga (Owner) left a comment:


LGTM

@hiyouga (Owner) commented Dec 18, 2025:

Could you please resolve the failed tests?

images=examples["_images"][i] or [],
videos=examples["_videos"][i] or [],
audios=examples["_audios"][i] or [],
loss_mask=example_loss_mask,

@hiyouga (Owner) commented:

Can we simply use examples["_loss_mask"][i] or []?
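
That is, the new argument would mirror how the neighboring multimodal fields are read; a sketch of the suggested change, not the merged code:

images=examples["_images"][i] or [],
videos=examples["_videos"][i] or [],
audios=examples["_audios"][i] or [],
loss_mask=examples["_loss_mask"][i] or [],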

batch_input_ids, batch_labels, batch_images, batch_videos, batch_audios = [], [], [], [], []
lengths = []
length2indexes = defaultdict(list)
loss_masks = examples.get("_loss_mask")

@hiyouga (Owner) commented:

The _loss_mask key is always present in examples.

for mask_value, message in zip(loss_mask, prompt + response):
    if message.get("role") == Role.ASSISTANT.value:
        assistant_loss_mask.append(1 if mask_value else 0)
if len(assistant_loss_mask) != len(encoded_pairs):

@hiyouga (Owner) commented:

I think this check is redundant.


assistant_loss_mask: Optional[list[int]] = None
if loss_mask is not None:
    if len(loss_mask) != len(prompt) + len(response):

@hiyouga (Owner) commented:

This check is already executed in the converter.

images: list["ImageInput"],
videos: list["VideoInput"],
audios: list["AudioInput"],
loss_mask: Optional[list[int]] = None,

@hiyouga (Owner) commented:

list[int]

target_label = target_ids

if assistant_loss_mask is not None and turn_idx < len(assistant_loss_mask):
    if assistant_loss_mask[turn_idx] == 0:

@hiyouga (Owner) commented:

Can we merge L97 and L98?
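
Merged, the two checks would read something like this (a sketch of the suggestion, reusing the body shown in the next hunk):

if (
    assistant_loss_mask is not None
    and turn_idx < len(assistant_loss_mask)
    and assistant_loss_mask[turn_idx] == 0
):
    target_label = [IGNORE_INDEX] * target_len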


if assistant_loss_mask is not None and turn_idx < len(assistant_loss_mask):
    if assistant_loss_mask[turn_idx] == 0:
        target_label = [IGNORE_INDEX] * target_len

@zengxingchen commented Jan 14, 2026:

@hiyouga @CjangCjengh
Hi! I just read through this PR and noticed a potential issue when mask_history=True and loss_mask is also used. In that case, mask_history sets IGNORE_INDEX before loss_mask is applied, so the loss mask may not take effect as intended.


In my opinion, when loss_mask is used, mask_history shouldn’t be responsible for setting IGNORE_INDEX. The main thing we still need from mask_history is reversing the IDs to avoid truncating the last turn. That said, I’m not sure whether this default behavior should also apply to loss_mask, since loss_mask is a fairly flexible option with different possible use cases.
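
One way to realize that split is sketched below. The names follow the PR (IGNORE_INDEX, assistant_loss_mask, mask_history), but the precedence rule itself is only a proposal, not merged behavior.

IGNORE_INDEX = -100  # sentinel for tokens excluded from the loss

def turn_labels(target_ids, turn_idx, num_turns, mask_history, assistant_loss_mask):
    # Proposal: a dataset-level loss_mask, when present, alone decides which
    # assistant turns train, so mask_history keeps only its role of reversing
    # the turn order to avoid truncating the last turn. Without a loss_mask,
    # mask_history behaves as before and trains only the final turn.
    if assistant_loss_mask is not None:
        if assistant_loss_mask[turn_idx] == 0:
            return [IGNORE_INDEX] * len(target_ids)
        return list(target_ids)
    if mask_history and turn_idx != num_turns - 1:
        return [IGNORE_INDEX] * len(target_ids)
    return list(target_ids)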
