@xiaoxi-wangfj commented Sep 24, 2025

Description

In Megatron-Core + Transformer Engine (TE), we quantize activations to FP8 before the MoE up-projection and then run the dispatch. This is compatible with TE’s FP8 fprop for MoE up-projections. However, the current code path implicitly assumes a GEMM-ready scale layout and ends up transposing scales multiple times.

Pipeline (FP8 forward path)
(1) Quantize
(2) DeepEP dispatch
(3) Create Float8BlockwiseQTensor
(4) Permute
(5) GroupedLinear fprop (dequantize)

For Float8BlockQuantizer / Float8BlockwiseQTensor, honor data_format == tex.Float8BlockScaleTensorFormat.COMPACT in this path and keep scales in the compact layout, instead of always materializing rowwise_scale_inv.T.contiguous().
Using COMPACT here eliminates three rowwise_scale_inv.T.contiguous() passes across the pipeline, in steps (1), (3), and (4).
This removes needless transposes/copies from the critical path, improves end-to-end MFU, and reduces peak GPU memory usage during training.

It could also eliminate two more rowwise_scale_inv.T.contiguous() passes, in step (4) (creating the Float8BlockwiseQTensor) and step (5), but that requires corresponding changes to GroupedLinear and is not included in this PR.
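A minimal sketch of the intended scale handling, assuming `x_fp8` is a Float8BlockwiseQTensor-like object exposing `rowwise_scale_inv` and that `tex` is the transformer_engine_torch extension module; the helper name and control flow are illustrative, not the actual TE implementation:

```python
# Illustrative sketch only; helper name and attribute access are assumptions.
import transformer_engine_torch as tex  # exposes Float8BlockScaleTensorFormat


def scale_inv_for_pipeline(x_fp8, data_format):
    """Pick the scale-inverse layout to carry through dispatch/permute."""
    if data_format == tex.Float8BlockScaleTensorFormat.COMPACT:
        # COMPACT: keep the scales exactly as the quantizer produced them,
        # so there is no transpose and no extra contiguous copy on the
        # critical path.
        return x_fp8.rowwise_scale_inv
    # GEMM_READY (previous behaviour): materialize a transposed, contiguous
    # copy so downstream GEMM kernels can consume it directly.
    return x_fp8.rowwise_scale_inv.T.contiguous()
```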

Type of change

  • Documentation change (change only to the documentation, either a fix or a new content)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Infra/Build change
  • Code refactoring

Changes

Please list the changes introduced in this PR:

  • Support both the GEMM_READY and COMPACT fp8_scale layouts in the _moe_permute_mask_map forward and _moe_unpermute_mask_map backward paths (see the sketch below)
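
A rough sketch of how the permute path can handle both layouts without transposing back and forth; the helper name `_permute_scale_inv`, the shape-based layout detection, and the `row_map` argument are illustrative assumptions, not the exact Megatron-Core/TE code:

```python
import torch


def _permute_scale_inv(scale_inv: torch.Tensor, row_map: torch.Tensor,
                       num_rows: int) -> torch.Tensor:
    """Permute per-block scale-inverses along the token dimension.

    COMPACT stores one row of block scales per token row, so rows can be
    gathered directly; GEMM_READY stores the transposed layout, so the same
    gather is done along dim 1, avoiding a .T.contiguous() round trip.
    """
    if scale_inv.shape[0] == num_rows:  # COMPACT: [num_rows, num_blocks]
        return scale_inv.index_select(0, row_map)
    # GEMM_READY: [num_blocks, num_rows]; gather columns instead.
    return scale_inv.index_select(1, row_map)
```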

Checklist:

  • I have read and followed the contributing guidelines
  • The functionality is complete
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@xiaoxi-wangfj marked this pull request as draft September 24, 2025 08:02
@xiaoxi-wangfj marked this pull request as ready for review September 24, 2025 08:39