
mamba: avoid redundant HBM reloads in causal_conv1d_update shift loop #4460

Open

wdykas wants to merge 4 commits into NVIDIA:main from wdykas:fix/causal-conv1d-shift-loop-hbm-reads

Conversation

@wdykas
Contributor

@wdykas wdykas commented Apr 24, 2026

The original shift loop re-reads conv_state[1..WIDTH-1] from HBM on every sequence step, even though those same values are already in registers as x_val_0/1/2 from the earlier load. When state_len == WIDTH (the common Mamba configuration where the conv state depth equals the kernel width), skip the re-reads and store from the existing registers. The HAS_INT_STATE snapshot path benefits from the same reuse. state_len > WIDTH falls through to the original loop.

Numerically bit-exact on conv_state; measured ~1.5% decode throughput improvement on nano-v3 at BS=1, OSL=256 (p50 245.21 -> 248.79 tok/s).
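The idea can be illustrated with a small pure-Python sketch (not the actual Triton kernel; names like `WIDTH`, `conv_state`, and `x_vals` are assumptions mirroring the PR text). The baseline shift loop reads `conv_state[1..WIDTH-1]` again on every step, while the optimized path stores from the values already held from the earlier load:

```python
# Sketch of the register-reuse idea in causal_conv1d_update.
# Hypothetical simplification: conv_state is a Python list standing in for
# the per-channel conv state in HBM, and x_vals stands in for the
# x_val_0/1/2 registers populated by the earlier load.

def shift_original(conv_state, x_new):
    """Baseline: re-reads conv_state[1..WIDTH-1] on every sequence step."""
    width = len(conv_state)
    for i in range(width - 1):
        conv_state[i] = conv_state[i + 1]  # extra HBM read each iteration
    conv_state[width - 1] = x_new
    return conv_state

def shift_register_reuse(conv_state, x_vals, x_new):
    """Optimized state_len == WIDTH path: store from already-loaded values."""
    width = len(conv_state)
    for i in range(width - 1):
        conv_state[i] = x_vals[i + 1]      # reuse registers, skip the re-read
    conv_state[width - 1] = x_new
    return conv_state

state = [1.0, 2.0, 3.0, 4.0]
x_vals = list(state)                       # values from the earlier load
a = shift_original(list(state), 5.0)
b = shift_register_reuse(list(state), x_vals, 5.0)
assert a == b                              # bit-exact, as the PR claims
```

Because the optimized path writes exactly the values the baseline would have re-read, the resulting `conv_state` is identical; the saving is purely in memory traffic, which is why the change is numerically bit-exact.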

What does this PR do ?

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or tag the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.


Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@wdykas wdykas requested review from a team as code owners April 24, 2026 16:32
@copy-pr-bot

copy-pr-bot Bot commented Apr 24, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 24, 2026 16:33
@wdykas
Contributor Author

wdykas commented Apr 24, 2026

/ok to test 31fa060

@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 24, 2026
@wdykas
Contributor Author

wdykas commented Apr 24, 2026

/ok to test e5c3fdc

@wdykas
Contributor Author

wdykas commented Apr 27, 2026

/claude review

Comment thread megatron/core/ssm/ops/causal_conv1d_triton.py
William Dykas added 2 commits April 28, 2026 06:58
…dykas/Megatron-LM into fix/causal-conv1d-shift-loop-hbm-reads
@wdykas wdykas marked this pull request as ready for review April 28, 2026 13:59
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 28, 2026 13:59
@wdykas
Contributor Author

wdykas commented Apr 28, 2026

/ok to test 48afcad


Projects

None yet

Development


2 participants