
[megatron] fix: add missing FP8 padding for router replay #5989

Open
eternally-z wants to merge 1 commit into verl-project:main from meituan-search:fp8_router_replay_fix

Conversation

@eternally-z
Contributor

What does this PR do?

The router replay path lacks FP8 padding logic. Consequently, enabling router replay during FP8 training leads to incorrect training results. This PR adds the missing FP8 padding support.
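
For context, "FP8 padding" here means rounding each packed sequence length up to the alignment the FP8 kernels require before the packed-sequence metadata is built; the replay path must apply the same rounding as the training forward pass, otherwise the replayed router top-k indices no longer line up with the padded token positions. The snippet below is only an illustrative sketch of that idea, not code from this PR: the 16-token alignment and the helper name pad_seqlens_for_fp8 are assumptions made for illustration.

import torch

FP8_ALIGN = 16  # assumed alignment; the real value depends on the FP8 backend

def pad_seqlens_for_fp8(seqlens: torch.Tensor, use_fp8_padding: bool) -> torch.Tensor:
    """Round each packed sequence length up to a multiple of FP8_ALIGN."""
    if not use_fp8_padding:
        return seqlens
    return ((seqlens + FP8_ALIGN - 1) // FP8_ALIGN) * FP8_ALIGN

# Example: lengths 37/128/200 become 48/128/208. The cu_seqlens built from the
# padded lengths must agree between the forward pass and router replay, or the
# replayed top-k expert indices shift relative to the tokens they belong to.
seqlens = torch.tensor([37, 128, 200])
padded = pad_seqlens_for_fp8(seqlens, use_fp8_padding=True)
cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.long), padded.cumsum(0)])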

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward, fully_async, one_step_off
    • If this PR involves multiple modules, separate them with a comma, like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.


@gemini-code-assist bot left a comment


Code Review

This pull request integrates FP8 padding support into the router replay utilities by passing a use_fp8_padding flag to the preprocessing functions. Feedback identifies a potential AttributeError when accessing tf_config.fp8 directly and suggests using getattr for safety. Additionally, it is recommended to set pre_process=False in merge_router_topk_indices to avoid unnecessary memory allocation and computation for tensors that are not utilized.

Comment on lines +254 to +269

+        fp8 = tf_config.fp8
+        use_fp8_padding = fp8 in ["e4m3", "hybrid"]

         if input_ids.is_nested:
             batch_size = input_ids.shape[0]
-            _, packed_seq_params, _ = preprocess_thd_engine(input_ids, pre_process=True)
+            _, packed_seq_params, _ = preprocess_thd_engine(
+                input_ids, pre_process=True, use_fp8_padding=use_fp8_padding
+            )
             layers_topk_idx = postprocess_thd_engine(
                 layers_topk_idx, packed_seq_params, input_ids, batch_size, post_process=True
             )
         else:
             batch_size, seq_len = attention_mask.shape[:2]
-            _, packed_seq_params = preprocess_packed_seqs(input_ids, attention_mask, pre_process=True)
+            _, packed_seq_params = preprocess_packed_seqs(
+                input_ids, attention_mask, pre_process=True, use_fp8_padding=use_fp8_padding
+            )

high

Accessing tf_config.fp8 directly can lead to an AttributeError if the attribute is missing in certain Megatron versions or custom configurations. It is safer to use getattr(tf_config, 'fp8', None).

Additionally, in merge_router_topk_indices, the calls to preprocess_thd_engine and preprocess_packed_seqs only require the packed_seq_params for the subsequent post-processing step. Setting pre_process=False avoids redundant memory allocation and computation for the processed tensor which is currently discarded.

        use_fp8_padding = getattr(tf_config, 'fp8', None) in ["e4m3", "hybrid"]

        if input_ids.is_nested:
            batch_size = input_ids.shape[0]
            _, packed_seq_params, _ = preprocess_thd_engine(
                input_ids, pre_process=False, use_fp8_padding=use_fp8_padding
            )
            layers_topk_idx = postprocess_thd_engine(
                layers_topk_idx, packed_seq_params, input_ids, batch_size, post_process=True
            )
        else:
            batch_size, seq_len = attention_mask.shape[:2]
            _, packed_seq_params = preprocess_packed_seqs(
                input_ids, attention_mask, pre_process=False, use_fp8_padding=use_fp8_padding
            )

Comment on lines +296 to +297

+        fp8 = tf_config.fp8
+        use_fp8_padding = fp8 in ["e4m3", "hybrid"]

high

Use getattr(tf_config, 'fp8', None) to safely check for FP8 configuration and avoid potential AttributeError if the attribute is not present in the configuration object.

Suggested change
-        fp8 = tf_config.fp8
-        use_fp8_padding = fp8 in ["e4m3", "hybrid"]
+        use_fp8_padding = getattr(tf_config, 'fp8', None) in ["e4m3", "hybrid"]
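
As a quick standalone check of the guarded lookup (an illustrative sketch only; the _ConfigWithoutFP8 class is hypothetical): when the config object has no fp8 attribute, getattr falls back to None, so use_fp8_padding evaluates to False and non-FP8 runs keep their current behavior.

class _ConfigWithoutFP8:  # stand-in for a config object lacking the fp8 field
    pass

tf_config = _ConfigWithoutFP8()
# tf_config.fp8 would raise AttributeError; the guarded lookup returns None.
use_fp8_padding = getattr(tf_config, "fp8", None) in ["e4m3", "hybrid"]
assert use_fp8_padding is False  # padding stays disabled for non-FP8 configs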
