
Conversation

@yuxianq (Collaborator) commented Sep 19, 2025

Summary by CodeRabbit

  • New Features
    • Automatic post-load weight processing across models, enabling FP8 scaling and MoE quant scale setup after loading.
    • Engine now auto-detects and runs post-load steps where supported for smoother model initialization.
  • Refactor
    • Replaced hook-based closures with explicit post-load methods on relevant modules and models.
    • Centralized invocation of post-load steps in the model loading flow.
    • Removed a constructor parameter related to post-load hooks in min-latency components; post-load behavior is now implicit.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
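
For example, a few illustrative invocations combining the options documented above (stage and GPU names are taken from the examples listed for each flag):

/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1" --detailed-log
/bot run --gpu-type "H100_PCIe" --test-backend "pytorch"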

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since a lack of user care and validation can cause the top of tree to break.

@yuxianq requested review from a team as code owners on September 19, 2025 at 13:24
@yuxianq changed the title from "Fix dummy load format for DeepSeek." to "[None][fix] Fix dummy load format for DeepSeek." on Sep 19, 2025
@yuxianq (Collaborator, Author) commented Sep 19, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19360 [ run ] triggered by Bot

@coderabbitai bot (Contributor) commented Sep 19, 2025

📝 Walkthrough

Introduces a standardized post-load hook, post_load_weights, across modules. Model engine now invokes this hook on all modules after load_weights. DeepseekV3 and min-latency Llama move prior post-load logic into class methods. Linear and MoE stacks add hook entry points and quantization-specific implementations, including FP8/MoE scale handling and layernorm rebinding.
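
In rough terms, the new flow is sketched below (a minimal illustration with placeholder class names; the engine-side loop mirrors the snippet in tensorrt_llm/_torch/pyexecutor/model_engine.py, everything else is simplified):

import torch.nn as nn


class IllustrativeQuantMethod:
    """Placeholder for a quantization method (e.g. a Linear or FusedMoE method)."""

    def post_load_weights(self, module: nn.Module) -> None:
        # Default is a no-op; quantization-specific subclasses transform
        # scales/layouts here (e.g. FP8 block-scale handling).
        pass


class IllustrativeLinear(nn.Module):
    """Placeholder module that owns a quant method, like Linear or the MoE classes."""

    def __init__(self) -> None:
        super().__init__()
        self.quant_method = IllustrativeQuantMethod()

    def post_load_weights(self) -> None:
        # The module-level hook simply delegates to its quant method.
        self.quant_method.post_load_weights(self)


def run_post_load_phase(model: nn.Module) -> None:
    # After load_weights(), the engine calls the hook on every module that
    # defines one; modules without the hook are skipped.
    for module in model.modules():
        if hasattr(module, "post_load_weights"):
            module.post_load_weights()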

Changes

Cohort / File(s) Summary of Changes
Model engine hook invocation
tensorrt_llm/_torch/pyexecutor/model_engine.py
After weight loading, iterates modules and calls post_load_weights when present, before finalizing MoE load balancer.
DeepseekV3 post-load processing
tensorrt_llm/_torch/models/modeling_deepseekv3.py
Added post_load_weights in DeepseekV3WeightLoader and DeepseekV3ForCausalLM; performs FP8 block-scale resmoothing/layout transforms when applicable and rebinds next_layer_layernorm across layers.
Llama min-latency API shift to methods
tensorrt_llm/_torch/models/modeling_llama_min_latency.py
Removed post_load_weights_hook parameter from constructors and its usage; added post_load_weights methods to Llama4MinLatencyGatedMLP, Llama4MinLatencyFusedMoE, Llama4MinLatencyMoE; moved FP8 QDQ/min-latency scale logic into these methods; removed functools.partial import.
Linear module hook
tensorrt_llm/_torch/modules/linear.py
Added LinearMethodBase.post_load_weights (no-op default) and Linear.post_load_weights delegating to quant_method.
MoE interface hook
tensorrt_llm/_torch/modules/fused_moe/interface.py
Added MoE.post_load_weights as a no-op extension point.
Fused MoE implementations: hook delegation
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py, .../fused_moe_triton.py, .../fused_moe_trtllm_gen.py, .../fused_moe_wide_ep.py
Added post_load_weights delegating to quant_method.post_load_weights(self); no other flow changes.
Fused MoE quantization methods
tensorrt_llm/_torch/modules/fused_moe/quantization.py
Added FusedMoEMethodBase.post_load_weights (no-op) and MXFP4WeightTRTLLMGenFusedMoEMethod.post_load_weights (no-op); implemented DeepSeekFP8BlockScalesFusedMoEMethodDeepGemm.post_load_weights to transform/apply quant scales and refresh via setup_quant_scales on SM100f.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant U as Loader
  participant ME as ModelEngine
  participant M as Model
  participant Mod as Module (various)
  participant QM as QuantMethod

  U->>ME: load(model, weights, format)
  ME->>M: load_weights(...)
  M-->>ME: weights loaded
  rect rgba(233,246,255,0.6)
    note over ME: Post-load phase
    ME->>M: modules()
    loop for each module
      ME->>Mod: hasattr(post_load_weights)? call
      alt Module defines hook
        Mod->>Mod: post_load_weights()
        opt Module delegates to quant
          Mod->>QM: post_load_weights(module)
          QM-->>Mod: apply scales/layout updates
        end
      else
        ME-->>Mod: skip
      end
    end
  end
  ME-->>U: load complete
sequenceDiagram
  autonumber
  participant G as Llama4MinLatencyGatedMLP
  participant L as Llama4MinLatencyLinear
  participant Q as Quant (FP8/QDQ)
  participant E as Fused/MoE Module
  participant QE as MoE QuantMethod

  rect rgba(242,255,233,0.6)
    note over G,L: New method-based flow
    G->>G: post_load_weights()
    G->>L: set inv_output_scale / trtllm_gen_global_scale
    L-->>G: updated scales
    E->>E: post_load_weights()
    E->>QE: post_load_weights(E)
    QE-->>E: compute min_latency_quant_scales / transform layouts
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)
Check name Status Explanation Resolution
Title Check ⚠️ Warning The title "[None][fix] Fix dummy load format for DeepSeek." is misleading given the provided changeset. The raw_summary shows the primary work is adding explicit post_load_weights hooks across many modules, moving FP8/block-scale post-load logic into those methods (including DeepseekV3), and removing the old post_load_weights_hook parameter in min-latency classes, with no evidence of a "dummy load format" fix. Therefore the title does not accurately summarize the primary changes and may confuse reviewers. Please update the PR title to reflect the actual intent, for example "[None][fix] Add post_load_weights hooks and FP8 post-load handling for DeepSeek and MoE" or "[TRTLLM-XXXX][refactor] Move post-load weight processing into post_load_weights() methods across models"; if a separate dummy-load-format fix exists, mention it in the description and include both in the title. Keep the title short and focused on the primary change so reviewers can quickly understand the purpose.
Description Check ⚠️ Warning The PR description is essentially the unfilled repository template and a CodeRabbit AI placeholder and lacks any substantive explanation, test coverage notes, or validation steps. Required sections such as Description and Test Coverage are empty and do not describe the rationale, files changed, or how to verify the changes. As a result the description is largely incomplete and insufficient for a proper review. Replace the template placeholders with a clear Description summarizing what was changed and why, list the key files and behavioral impact, and add a Test Coverage section with unit/integration tests or manual validation steps; explicitly call out public API changes (e.g., removed post_load_weights_hook, added post_load_weights methods) and any migration notes. Include the related JIRA/issue reference or mark [None], and note any CI expectations or CODEOWNERS updates required for approval.
Docstring Coverage ⚠️ Warning Docstring coverage is 5.41%, which is insufficient. The required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).


@coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Nitpick comments (8)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)

1530-1559: Consider extracting the layernorm rebinding logic into a separate method.

The post_load_weights method handles two distinct concerns: FP8 block-scale transformations and layernorm rebinding. For better maintainability and testability, consider splitting these into separate helper methods.

 def post_load_weights(self):
+    self._apply_fp8_block_scale_transformations()
+    self._rebind_layernorms()
+
+def _apply_fp8_block_scale_transformations(self):
     all_named_modules = dict(self.model.named_modules())
     for name, module in tqdm(all_named_modules.items(),
                              desc="Loading weights"):
         if len(module._parameters) <= 0 or name.startswith("draft_model"):
             continue
-        else:
-            if self.model_config.quant_config.layer_quant_mode.has_fp8_block_scales(
-            ) and is_sm_100f() and hasattr(module, "weight_scale"):
-                weight, weight_scale = resmooth_to_fp8_e8m0(
-                    module.weight, module.weight_scale)
-                transfromed_scale = transform_sf_into_required_layout(
-                    weight_scale,
-                    mn=weight.shape[0],
-                    k=weight.shape[1],
-                    recipe=(1, 128, 128),
-                    is_sfa=False)
-                module.weight = nn.Parameter(weight, requires_grad=False)
-                module.weight_scale = nn.Parameter(transfromed_scale,
-                                                   requires_grad=False)
+        if self.model_config.quant_config.layer_quant_mode.has_fp8_block_scales(
+        ) and is_sm_100f() and hasattr(module, "weight_scale"):
+            weight, weight_scale = resmooth_to_fp8_e8m0(
+                module.weight, module.weight_scale)
+            transfromed_scale = transform_sf_into_required_layout(
+                weight_scale,
+                mn=weight.shape[0],
+                k=weight.shape[1],
+                recipe=(1, 128, 128),
+                is_sfa=False)
+            module.weight = nn.Parameter(weight, requires_grad=False)
+            module.weight_scale = nn.Parameter(transfromed_scale,
+                                               requires_grad=False)
 
+def _rebind_layernorms(self):
     if not self.is_draft_model:
         for idx, layer in enumerate(
                 self.model.model.layers[:self.config.num_hidden_layers]):
             if idx == self.config.num_hidden_layers - 1:
                 layer.next_layer_layernorm = self.model.model.norm
             else:
                 layer.next_layer_layernorm = self.model.model.layers[
                     idx + 1].input_layernorm
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (1)

567-582: Consider adding error handling for missing scales.

The method assumes shared_expert.gate_up_proj.input_scale exists when min-latency mode is enabled. Consider adding validation to provide better error messages.

 def post_load_weights(self):
     # Set min-latency quant scales for routed experts if we plan to use min-latency MoE kernels.
     # This is because the routed experts' input scale is after the score multiplication, so we must use the
     # pre-score scaling input scale, which happens to be shared expert's input scale.
-    if self.experts.enable_min_latency_fused_moe and hasattr(
-            self.shared_expert.gate_up_proj, "input_scale"):
+    if self.experts.enable_min_latency_fused_moe:
+        if not hasattr(self.shared_expert.gate_up_proj, "input_scale"):
+            raise AttributeError(
+                "Min-latency MoE kernels are enabled but shared_expert.gate_up_proj.input_scale is missing. "
+                "This typically indicates a quantization configuration mismatch.")
         pre_score_scaling_input_scale = self.shared_expert.gate_up_proj.input_scale
         self.experts.min_latency_quant_scales = FusedMoEQuantScalesFP8(
             fc1_dequant=self.experts.fc31_dequant.data /
             self.experts.fc31_input_dequant.data *
             pre_score_scaling_input_scale,
             fc2_quant=self.experts.fc2_quant,
             fc2_dequant=self.experts.fc2_dequant,
             fc1_input_dequant=pre_score_scaling_input_scale,
         )
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)

1049-1052: Harden post-load hook invocation (callable check + error context).

Guard against non-callable attributes and surface module context on failures.

-            for module in model.modules():
-                if hasattr(module, 'post_load_weights'):
-                    module.post_load_weights()
+            for module in model.modules():
+                fn = getattr(module, 'post_load_weights', None)
+                if callable(fn):
+                    try:
+                        fn()
+                    except Exception as e:
+                        logger.error("post_load_weights failed for %s: %r",
+                                     module.__class__.__name__, e)
+                        raise
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)

198-200: Clarify optional hook intent with a brief docstring.

Minor readability improvement; keeps it non-abstract by design.

-    def post_load_weights(self):
-        pass
+    def post_load_weights(self):
+        """Optional hook called after load_weights to finalize module-specific state."""
+        pass
tensorrt_llm/_torch/modules/linear.py (1)

244-246: Silence Ruff B027 for intentional no-op hook.

Keep default no-op but avoid the linter warning.

-    def post_load_weights(self, module: Linear):
-        pass
+    def post_load_weights(self, module: Linear):
+        return None  # noqa: B027 - optional hook with default no-op
tensorrt_llm/_torch/modules/fused_moe/quantization.py (3)

325-327: Add @abstractmethod decorator or provide documentation for the base implementation.

This method is part of an abstract base class (FusedMoEMethodBase) but lacks the @abstractmethod decorator. Since most subclasses don't need to override this method (it's a no-op by default), the current implementation is acceptable. However, consider adding a docstring to clarify that this is an optional hook for subclasses.

 def post_load_weights(self, module: torch.nn.Module):
+    """
+    Optional hook for post-processing after weights are loaded.
+    Subclasses can override this to perform additional setup.
+    """
     pass

732-752: Consider error handling for is_sm_100f() device compatibility.

The post_load_weights method in DeepSeekFP8BlockScalesFusedMoEMethodDeepGemm performs critical scale transformations when is_sm_100f() is true, but there's no validation that the required operations are available on the current device.

Consider adding a check to ensure the device supports the required operations:

 def post_load_weights(self, module: torch.nn.Module):
     if is_sm_100f():
+        if not hasattr(module, 'quant_scales') or len(module.quant_scales) < 2:
+            trtllm_logger.logger.warning(
+                "Expected quant_scales not found for SM100f post-load processing"
+            )
+            return
         transfromed_w3_w1_scale = transform_sf_into_required_layout(

734-752: Register non-trainable scaling factors as buffers (not nn.Parameter)

Direct nn.Parameter assignments are present here and elsewhere (tensorrt_llm/_torch/modules/fused_moe/quantization.py:741,750 and tensorrt_llm/_torch/models/modeling_deepseekv3.py:308,327,1547–1548). Because these scaling factors are non-trainable, replace

module.w3_w1_weight_scaling_factor = nn.Parameter(transfromed_w3_w1_scale, requires_grad=False)
module.w2_weight_scaling_factor = nn.Parameter(transfromed_w2_scale, requires_grad=False)

with

module.register_buffer('w3_w1_weight_scaling_factor', transfromed_w3_w1_scale)
module.register_buffer('w2_weight_scaling_factor', transfromed_w2_scale)

so they remain in state_dict but are excluded from model.parameters() and optimizers. Verify that setup_quant_scales is compatible with buffers (or update it).
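
For illustration, a minimal sketch of the suggested change (the helper name is hypothetical, and it assumes the transformed scales are plain tensors at this point):

import torch
import torch.nn as nn


def register_scales(module: nn.Module,
                    w3_w1_scale: torch.Tensor,
                    w2_scale: torch.Tensor) -> None:
    # register_buffer() refuses to overwrite an attribute that already exists
    # (e.g. one previously created as an nn.Parameter), so drop it first.
    for name in ("w3_w1_weight_scaling_factor", "w2_weight_scaling_factor"):
        if hasattr(module, name):
            delattr(module, name)
    # Buffers stay in state_dict() but are excluded from module.parameters().
    module.register_buffer("w3_w1_weight_scaling_factor", w3_w1_scale)
    module.register_buffer("w2_weight_scaling_factor", w2_scale)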

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fbe325c and 6e0532a.

📒 Files selected for processing (10)
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py (1 hunks)
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py (3 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/interface.py (1 hunks)
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py (2 hunks)
  • tensorrt_llm/_torch/modules/linear.py (2 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py
  • tensorrt_llm/_torch/modules/linear.py
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py
  • tensorrt_llm/_torch/pyexecutor/model_engine.py
  • tensorrt_llm/_torch/modules/fused_moe/interface.py
  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
  • tensorrt_llm/_torch/models/modeling_llama_min_latency.py
  • tensorrt_llm/_torch/modules/fused_moe/quantization.py
🧠 Learnings (1)
📚 Learning: 2025-09-03T13:16:06.824Z
Learnt from: nvpohanh
PR: NVIDIA/TensorRT-LLM#7478
File: tensorrt_llm/_torch/models/modeling_llama.py:1315-1315
Timestamp: 2025-09-03T13:16:06.824Z
Learning: The Llama4VisionEncoder.load_weights method signature is `def load_weights(self, weights: Dict)` and should not be confused with Llama4ForConditionalGeneration.load_weights which has a different signature including weight_mapper parameter.

Applied to files:

  • tensorrt_llm/_torch/models/modeling_deepseekv3.py
🧬 Code graph analysis (10)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (3)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (6)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (8)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
  • post_load_weights (1530-1558)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (2)
  • post_load_weights (310-317)
  • post_load_weights (567-581)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
tensorrt_llm/_torch/modules/linear.py (2)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (6)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
tensorrt_llm/_torch/pyexecutor/model_engine.py (9)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
  • post_load_weights (1530-1558)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (2)
  • post_load_weights (310-317)
  • post_load_weights (567-581)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (8)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
  • post_load_weights (1530-1558)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (2)
  • post_load_weights (310-317)
  • post_load_weights (567-581)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (10)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (2)
  • post_load_weights (310-317)
  • post_load_weights (567-581)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (2)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
tensorrt_llm/module.py (1)
  • named_modules (102-114)
tensorrt_llm/_utils.py (3)
  • is_sm_100f (695-698)
  • shape (955-956)
  • shape (972-973)
tensorrt_llm/quantization/utils/fp8_utils.py (2)
  • resmooth_to_fp8_e8m0 (82-92)
  • transform_sf_into_required_layout (169-217)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (8)
tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)
  • post_load_weights (1530-1558)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (2)
  • post_load_weights (198-199)
  • has_fp8_qdq (287-290)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (3)
  • post_load_weights (325-326)
  • post_load_weights (732-752)
  • FusedMoEQuantScalesFP8 (33-37)
tensorrt_llm/_torch/modules/linear.py (3)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
  • has_fp8_qdq (1888-1891)
tensorrt_llm/_torch/modules/fused_moe/quantization.py (6)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)
  • post_load_weights (571-572)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)
  • post_load_weights (1389-1390)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)
  • post_load_weights (171-172)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)
  • post_load_weights (1008-1009)
tensorrt_llm/_torch/modules/fused_moe/interface.py (1)
  • post_load_weights (198-199)
tensorrt_llm/_torch/modules/linear.py (2)
  • post_load_weights (244-245)
  • post_load_weights (2008-2009)
🪛 Ruff (0.12.2)
tensorrt_llm/_torch/modules/linear.py

244-245: LinearMethodBase.post_load_weights is an empty method in an abstract base class, but has no abstract decorator

(B027)

tensorrt_llm/_torch/modules/fused_moe/quantization.py

325-326: FusedMoEMethodBase.post_load_weights is an empty method in an abstract base class, but has no abstract decorator

(B027)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (8)
tensorrt_llm/_torch/models/modeling_llama_min_latency.py (1)

310-318: LGTM! Clean separation of concerns.

The new post_load_weights method properly handles FP8 QDQ scale adjustments after weights are loaded, which is a cleaner approach than the previous hook-based system.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_triton.py (1)

1389-1390: LGTM! Consistent with other MoE implementations.

The post_load_weights hook properly delegates to the quantization method, maintaining consistency across all fused MoE backends.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py (1)

1008-1009: LGTM! Maintains consistent API across MoE implementations.

The addition of post_load_weights completes the pattern across all MoE implementations, ensuring proper post-load processing for quantization-specific adjustments.

tensorrt_llm/_torch/models/modeling_deepseekv3.py (1)

1537-1549: FP8 recipe consistent across codebase.
All calls to transform_sf_into_required_layout use recipe=(1, 128, 128) with is_sfa=False — occurrences: tensorrt_llm/_torch/models/modeling_deepseekv3.py and tensorrt_llm/_torch/modules/fused_moe/quantization.py (two call sites); function defined in tensorrt_llm/quantization/utils/fp8_utils.py.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_trtllm_gen.py (1)

171-173: LGTM: delegates to quantization hook.

Matches the new engine-wide post-load flow.

tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

571-573: LGTM: consistent post-load hook.

Keeps behavior centralized in quant_method.

tensorrt_llm/_torch/modules/linear.py (1)

2008-2010: LGTM: Linear exposes post-load delegation.

Aligns with engine invocation and quant_method contract.

tensorrt_llm/_torch/modules/fused_moe/quantization.py (1)

732-752: Good integration with the module architecture.

The implementation correctly:

  1. Checks for SM100f architecture before applying transformations
  2. Uses the existing quant_scales tuple to access scale data
  3. Applies the transform_sf_into_required_layout with appropriate parameters
  4. Calls setup_quant_scales to refresh the scales after transformation

This ensures that the DeepSeek model's specific requirements are met while maintaining compatibility with the existing MoE framework.

Signed-off-by: Yuxian Qiu <[email protected]>
@yuxianq (Collaborator, Author) commented Sep 19, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19363 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19360 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #14539 (Blue Ocean) completed with status: ABORTED

@yuxianq (Collaborator, Author) commented Sep 22, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19457 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19457 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14626 completed with status: 'FAILURE'

Signed-off-by: Yuxian Qiu <[email protected]>
@yuxianq (Collaborator, Author) commented Sep 22, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #19579 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #19579 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #14721 completed with status: 'FAILURE'

Signed-off-by: Yuxian Qiu <[email protected]>
@yuxianq (Collaborator, Author) commented Sep 23, 2025

/bot run --stage-list "B200_PCIe-PyTorch-1"

@tensorrt-cicd (Collaborator)

PR_Github #19655 [ run ] triggered by Bot
