
feat(attention): Add rotary_base_per_layer for Step-3.5-Flash #4473

Open

shifangx wants to merge 7 commits into NVIDIA:dev from shifangx:shifang/attention-for-step-3.5-flash

Conversation

@shifangx
Contributor

@shifangx shifangx commented Apr 26, 2026

What does this PR do?

feat(attention): Add rotary_base_per_layer for Step-3.5-Flash

  • rotary_base_per_layer: Optional[List[float]] -- per-layer RoPE theta values.
    When set, each SelfAttention creates its own RotaryEmbedding (see the sketch below).
    Quite a few places assume the GPT model includes a RotaryEmbedding by default, so this PR keeps the model-level RotaryEmbedding inside the GPT model as well.

The new feature defaults to None and has no effect on existing models.
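As a rough illustration of the per-layer theta selection described above, here is a minimal sketch assuming Megatron-Core's RotaryEmbedding; the helper name build_layer_rotary, the 1-based layer numbering, and the constructor arguments shown are assumptions for illustration, not this PR's actual diff:

```python
from typing import Optional

# Illustrative import; check the RotaryEmbedding location and signature
# in your Megatron-Core version.
from megatron.core.models.common.embeddings.rotary_pos_embedding import RotaryEmbedding


def build_layer_rotary(config, layer_number: int) -> Optional[RotaryEmbedding]:
    """Hypothetical helper: pick this layer's RoPE theta from config.rotary_base_per_layer.

    Returns None when the option is unset, in which case the shared
    model-level rotary embedding is used as before.
    """
    bases = config.rotary_base_per_layer  # Optional[List[float]], one entry per layer (assumed)
    if bases is None:
        return None
    return RotaryEmbedding(
        kv_channels=config.kv_channels,
        rotary_percent=1.0,                   # assumption: full rotary dimension
        rotary_base=bases[layer_number - 1],  # assumption: 1-based layer numbering
    )
```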

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either [email protected] or [email protected].

@shifangx shifangx requested review from a team as code owners April 26, 2026 00:06
@copy-pr-bot

copy-pr-bot Bot commented Apr 26, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch 4 times, most recently from 3d6565f to 422246b Compare April 26, 2026 15:17
…r for Step-3.5-Flash

Adds two new optional, off-by-default features to TransformerConfig and
SelfAttention to faithfully represent the Step-3.5-Flash architecture.

- attention_per_head_gate: adds a separate ColumnParallelLinear(hidden_size
  -> num_attention_heads) whose sigmoid output gates each head independently
  (Step-3.5-Flash g_proj). Applied after core attention, before linear_proj.

- rotary_base_per_layer: Optional[List[float]] -- per-layer RoPE theta values.
  When set, each SelfAttention creates its own RotaryEmbedding; the shared
  model-level rotary_pos_emb in GPTModel is not created.

Both features default to False/None and have no effect on existing models.

Co-Authored-By: Claude Sonnet 4.6 <[email protected]>
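For readers skimming the commit message, a small framework-free sketch of the per-head gating it describes; plain torch.nn.Linear stands in for ColumnParallelLinear, and the [seq, batch, hidden] tensor layout and the helper name apply_per_head_gate are illustrative assumptions rather than the PR's code:

```python
import torch


def apply_per_head_gate(
    core_attn_out: torch.Tensor,  # [s, b, num_heads * head_dim], output of core attention
    hidden_states: torch.Tensor,  # [s, b, hidden_size], input to the attention block
    g_proj: torch.nn.Linear,      # Linear(hidden_size -> num_heads); stand-in for ColumnParallelLinear
) -> torch.Tensor:
    """Gate each head with sigmoid(g_proj(hidden_states)), applied after
    core attention and before linear_proj."""
    s, b, _ = core_attn_out.shape
    num_heads = g_proj.out_features
    gate = torch.sigmoid(g_proj(hidden_states))    # [s, b, num_heads]
    out = core_attn_out.view(s, b, num_heads, -1)  # split the head dimension
    out = out * gate.unsqueeze(-1)                 # broadcast the per-head scalar over head_dim
    return out.reshape(s, b, -1)                   # merge heads back
```

Because this gate is one scalar per head, while the existing attention_output_gate produces one value per channel, broadcasting over the last dimension lets both share a single application path (see the review comment below).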
@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from 422246b to 6f80c19 Compare April 26, 2026 15:29
Contributor

@Victarry Victarry left a comment


PR adds two opt-in features (use_head_wise_attn_gate + rotary_base_per_layer) for Step-3.5-Flash; defaults are off, existing models unaffected.

Main suggestions (see inline comments for details):

  1. [CRITICAL] Fold use_head_wise_attn_gate into linear_qkv and merge it with the existing attention_output_gate path. Share _split_qkv / _apply_output_gate (broadcasting handles both gate shapes — no new branch needed; see the sketch after this list). Benefits:

    • Eliminates the TE/local backend divergence in g_proj math (submodules.linear_qkv resolves to TELayerNormColumnParallelLinear under TE, which adds an independent learnable LayerNorm).
    • Avoids two near-parallel "output gating" implementations coexisting long-term and drifting.
    • Saves one GEMM kernel launch.
    • Makes the two gate flags naturally mutually exclusive.
  2. [IMPORTANT] The rotary_base_per_layer forward override depends on the model-level rotary_pos_emb already existing, contradicting the PR description's claim that "model-level rotary_pos_emb is not created" — this PR does not modify GPTModel, so both rotaries actually coexist.

  3. [IMPORTANT] _build_per_layer_rotary_pos_emb duplicates the rotary-construction logic from gpt_model.py; recommend extracting a shared factory function.

  4. [IMPORTANT] No test coverage at all for rotary_base_per_layer.

  5. [SUGGESTION] assert False, "Invalid position embedding type" is uninformative.
  6. [SUGGESTION] getattr(self.config, 'rotary_base_per_layer', None) is unnecessary (the field is added by this PR).
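As a tiny sketch of the broadcasting point in item 1: assuming the core attention output is reshaped to [s, b, num_heads, head_dim], a single helper can apply either gate shape; the body below is illustrative, not the reviewer's code:

```python
import torch


def _apply_output_gate(core_attn_out: torch.Tensor, gate_logits: torch.Tensor) -> torch.Tensor:
    """Shared gate application (illustrative).

    core_attn_out: [s, b, num_heads, head_dim]
    gate_logits:   [s, b, num_heads, 1]         for the per-head gate, or
                   [s, b, num_heads, head_dim]  for the existing per-channel output gate.
    One sigmoid plus an elementwise multiply covers both shapes via broadcasting,
    so no extra branch is needed and the two config flags can share one code path.
    """
    return core_attn_out * torch.sigmoid(gate_logits)
```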

Inline comment threads:
  • megatron/core/transformer/attention.py (5 threads, 3 outdated)
  • tests/unit_tests/transformer/test_head_wise_attn_gate.py (1 thread, outdated)
@shifangx
Contributor Author

/ok to test 6f80c19

@copy-pr-bot

copy-pr-bot Bot commented Apr 29, 2026

/ok to test 6f80c19

@shifangx, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from f728d71 to 9bd8555 Compare April 29, 2026 12:35
@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from 9bd8555 to 6ad0ceb Compare April 29, 2026 12:38
@shifangx
Contributor Author

/ok to test 7485da4

@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from c943f32 to c75616a Compare April 30, 2026 02:46
@shifangx
Contributor Author

/ok to test c75616a

@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from c75616a to 5671195 Compare April 30, 2026 03:06
@shifangx
Contributor Author

/ok to test 5671195

@shifangx shifangx force-pushed the shifang/attention-for-step-3.5-flash branch from c250903 to bea28f6 Compare April 30, 2026 05:34
@shifangx
Contributor Author

/ok to test bea28f6

@shifangx shifangx changed the title feat(attention): Add attention_per_head_gate and rotary_base_per_laye… feat(attention): Add rotary_base_per_layer for Step-3.5-Flash Apr 30, 2026
@Victarry
Contributor

Please also submit a PR to the main branch. Thanks!
