
Standardize misc graph interface #4485

Merged
tdene merged 11 commits into NVIDIA:main from tdene:tde/refactor_mtp_graphs
Apr 29, 2026

Conversation

@tdene (Contributor) commented Apr 27, 2026

What does this PR do ?

#4425 added syntactic sugar to CudaGraphManager that makes it easy to graph arbitrary non-layer torch ops, which may be keyed on things other than padded_batch_dimensions. This common sugar will be shared by several pieces of upcoming code.

In parallel, #4260 added similar syntactic sugar, but special-cased to MTP CUDA graphs. Its implementation closely mirrors that of #4425.

This PR subsumes MTP CUDA graphs into the same sugar as #4425, reconciling #4260's implementation with that of other planned work.
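To illustrate the idea behind this sugar, here is a minimal, hypothetical sketch (not Megatron-LM's actual CudaGraphManager API) of a graph cache keyed on arbitrary hashable keys rather than only padded batch dimensions: the first call for a key captures, and subsequent calls replay the cached result. The `GraphCache` name, the `make_doubler` factory, and the `("seq_len", 8)` key are all illustrative assumptions.

```python
# Hypothetical sketch of a keyed graph cache. In real code the capture
# step would record a CUDA graph; here a plain callable stands in for it.

class GraphCache:
    """Caches one 'captured' callable per key and replays it on later calls."""

    def __init__(self, capture_fn):
        self._capture_fn = capture_fn  # builds a replayable callable for a key
        self._graphs = {}

    def __call__(self, key, *args):
        if key not in self._graphs:
            # First time this key is seen: capture once, then always replay.
            self._graphs[key] = self._capture_fn(key)
        return self._graphs[key](*args)


# Toy usage: key on sequence length instead of batch size.
captures = []

def make_doubler(key):
    captures.append(key)                  # record how often capture runs
    return lambda xs: [2 * v for v in xs]

cache = GraphCache(make_doubler)
out1 = cache(("seq_len", 8), [1, 2, 3])   # captures, then runs
out2 = cache(("seq_len", 8), [4, 5])      # replays, no re-capture
```

The point of keying on arbitrary values is that the same cache can serve ops whose captured shape depends on sequence length, speculative depth, or anything else hashable.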

Issue tracking

For PRs from open-source community contributors:

  • New features: a linked issue is required. Please open a feature request and reference it here before submitting the PR.
  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue:

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be reviewed and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot (Bot) commented Apr 27, 2026

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

Contributors can view more details about this message here.

@tdene tdene marked this pull request as ready for review April 27, 2026 22:27
@tdene tdene requested review from a team as code owners April 27, 2026 22:27
@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 27, 2026 22:28
@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 27, 2026
@tdene (Contributor, Author) commented Apr 27, 2026

/claude review

@claude (Bot) left a comment:
LGTM

@copy-pr-bot (Bot) commented Apr 28, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

Comment thread megatron/core/models/common/language_module/language_module.py Outdated
Comment thread megatron/core/models/common/language_module/language_module.py
Comment thread megatron/core/inference/engines/dynamic_engine.py Outdated
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Apr 28, 2026
@tdene tdene force-pushed the tde/refactor_mtp_graphs branch from e7b3abe to 8bf775d Compare April 28, 2026 19:58
Comment on lines 1103 to 1161

@@ -1126,11 +1144,18 @@ def teardown_method(self, method):
         CudaGraphManager.global_mempool = None
         Utils.destroy_model_parallel()

+    @pytest.mark.parametrize(
+        "make_module",
+        [
+            pytest.param(_make_simple_module, id="nn_module"),
+            pytest.param(_make_simple_non_module, id="plain_class"),
+        ],
+    )
     @torch.inference_mode()
-    def test_inline_capture_matches_eager(self):
+    def test_inline_capture_matches_eager(self, make_module):
         """Inline-captured graph output must match eager execution."""
         config = self._make_config()
-        module = _SimpleModule(config).cuda().eval()
+        module = make_module(config)

         # Get eager reference before wrapping
         x = torch.randn(4, config.hidden_size, device="cuda")
A Contributor commented:
Can we use the existing model building logic for this? Seems more rigorous than a dummy model.

@tdene tdene enabled auto-merge April 29, 2026 16:51
@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Apr 29, 2026
@tdene tdene added this pull request to the merge queue Apr 29, 2026
@svcnvidia-nemo-ci commented:
🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25129217585

Merged via the queue into NVIDIA:main with commit 3f59bbb Apr 29, 2026
71 of 73 checks passed
@tdene tdene deleted the tde/refactor_mtp_graphs branch April 29, 2026 20:08

Labels

Approved (All necessary approvals have been made), complexity: medium


6 participants