
Conversation

@jlamypoirier
Collaborator

No description provided.

PytLab and others added 30 commits January 18, 2024 23:08
Signed-off-by: Selvaraj Anandaraj <[email protected]>
Signed-off-by: jiemingz <[email protected]>
Signed-off-by: Selvaraj Anandaraj <[email protected]>
Add is_first_microbatch for TE (Transformer Engine)

See merge request ADLR/megatron-lm!1033
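
A minimal sketch of how this flag is typically threaded through, assuming Transformer Engine's `transformer_engine.pytorch` modules (which accept an `is_first_microbatch` keyword); the loop structure here is illustrative, not the Megatron-LM code path:

```python
# Illustrative only: pass is_first_microbatch into a TE layer so FP8
# weight casts are computed on the first microbatch of a global batch
# and cached for the remaining microbatches (weights are unchanged
# until the optimizer step).
import torch
import transformer_engine.pytorch as te

layer = te.Linear(1024, 1024)

def run_microbatches(microbatches):
    for i, mb in enumerate(microbatches):
        out = layer(mb, is_first_microbatch=(i == 0))
        out.sum().backward()

run_microbatches([torch.randn(8, 1024, device="cuda") for _ in range(4)])
```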
Need a switch at the NeMo level to enable atomic GEMM

See merge request ADLR/megatron-lm!1017
Add distributed checkpoint support to non-TE based models

See merge request ADLR/megatron-lm!1005
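
For context, a hedged sketch of the Megatron-Core distributed-checkpointing flow; it assumes `megatron.core.dist_checkpointing` and a model exposing `sharded_state_dict()` (both exist in Megatron-LM, though exact signatures vary by version):

```python
# Sketch, not the exact Megatron-LM call sites: each rank contributes
# only its own shards, and the library lays them out so the checkpoint
# can be reloaded under a different parallel configuration.
from megatron.core import dist_checkpointing

def save_ckpt(model, ckpt_dir):
    sharded_sd = model.sharded_state_dict()
    dist_checkpointing.save(sharded_sd, ckpt_dir)

def load_ckpt(model, ckpt_dir):
    sharded_sd = model.sharded_state_dict()
    loaded = dist_checkpointing.load(sharded_sd, ckpt_dir)
    model.load_state_dict(loaded)
```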
Signed-off-by: Selvaraj Anandaraj <[email protected]>
Support for activation offloading to CPU in M-LM

See merge request ADLR/megatron-lm!1016
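
The general technique, sketched with PyTorch's saved-tensors hooks rather than the M-LM implementation: activations saved for backward are moved to pinned CPU memory during the forward pass and copied back to the GPU when backward needs them.

```python
# Illustrative CPU offload via torch.autograd.graph.saved_tensors_hooks.
import torch

def pack_to_cpu(t):
    # Stash the saved activation in pinned host memory; real
    # implementations overlap this copy with compute.
    cpu = torch.empty(t.size(), dtype=t.dtype, pin_memory=True)
    cpu.copy_(t)
    return (t.device, cpu)

def unpack_from_cpu(packed):
    device, cpu = packed
    return cpu.to(device, non_blocking=True)

with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_from_cpu):
    x = torch.randn(16, 1024, device="cuda", requires_grad=True)
    y = torch.nn.functional.gelu(x @ x.T)
y.sum().backward()
```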
Add RoPE and SwiGLU fusion

See merge request ADLR/megatron-lm!946
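
As a point of reference, the SwiGLU pattern being fused, written in plain PyTorch (illustrative; the merge request wires in fused kernels): the MLP up-projection produces 2x ffn_hidden_size channels, which are split and combined as silu(y1) * y2.

```python
import torch
import torch.nn.functional as F

@torch.jit.script
def swiglu(y: torch.Tensor) -> torch.Tensor:
    # Split the doubled up-projection and apply the gated activation;
    # scripting lets the pointwise ops fuse into fewer kernels.
    y1, y2 = torch.chunk(y, 2, dim=-1)
    return F.silu(y1) * y2
```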
Add jit_fuser to switch between torch.jit.script and torch.compile

See merge request ADLR/megatron-lm!1036
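
A hedged sketch of such a switch (the real helper lives in Megatron-LM; the version check and the fused function shown here are approximate):

```python
import torch

# Prefer torch.compile on PyTorch >= 2.0, fall back to torch.jit.script.
jit_fuser = torch.compile if hasattr(torch, "compile") else torch.jit.script

@jit_fuser
def bias_gelu(bias: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    x = bias + y
    # tanh approximation of GELU, a common fusion target
    return 0.5 * x * (1.0 + torch.tanh(0.79788456 * x * (1.0 + 0.044715 * x * x)))
```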
Run black on megatron/optimizer

See merge request ADLR/megatron-lm!1050
deepakn94 and others added 30 commits February 27, 2024 20:31
…ommunication

Compute norm once per batch (instead of once per microbatch) and once per bucket (instead of once per parameter)
Fix NaN checking in grads: it should be performed before the data-parallel all-reduce

See merge request ADLR/megatron-lm!989
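
The ordering matters because a NaN produced on one rank propagates to every rank through the all-reduce. A hedged sketch of the intended check (the bucket structure is illustrative, not the M-LM code):

```python
import torch
import torch.distributed as dist

def reduce_grad_buckets(buckets):
    for flat in buckets:  # one flat gradient buffer per bucket
        # Check *local* grads first so the failure points at the rank
        # that produced the bad value, before it is mixed into all ranks.
        if not torch.isfinite(flat).all():
            raise FloatingPointError("NaN/Inf in local grads before all-reduce")
        dist.all_reduce(flat, op=dist.ReduceOp.SUM)
        # The norm change is the same idea: one flat.norm() per bucket
        # rather than one norm call per parameter.
```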
Move to Draco OCI

See merge request ADLR/megatron-lm!1137
Print number of transformer and embedding parameters separately

See merge request ADLR/megatron-lm!1159
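
A rough illustration of the split being reported (the name-matching heuristic is hypothetical, not Megatron-LM's actual bookkeeping):

```python
def count_params(model):
    # Separate embedding parameters from the transformer body by name.
    embedding = sum(p.numel() for n, p in model.named_parameters()
                    if "embedding" in n)
    total = sum(p.numel() for p in model.parameters())
    return total - embedding, embedding  # (transformer, embedding)
```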
Mcore LLaVA model

See merge request ADLR/megatron-lm!1151
[OMNIML-614] AMMO PTQ + TensorRT-LLM export examples for Megatron-LM

See merge request ADLR/megatron-lm!1013
Make throughput and memory footprint formulae compatible with arbitrary ffn_hidden_size

See merge request ADLR/megatron-lm!1169
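
A back-of-envelope version of such a formula, generalized so the MLP term uses an explicit f = ffn_hidden_size rather than assuming f = 4h; this follows the usual 2·M·N·K-per-GEMM accounting and is an approximation, not the exact Megatron-LM expression:

```python
def train_flops_per_iter(b, s, L, h, f, V):
    """b=batch, s=sequence length, L=layers, h=hidden size,
    f=ffn_hidden_size, V=vocabulary size."""
    attn = 8 * b * s * h * h + 4 * b * s * s * h  # QKV/out proj + attention
    mlp = 4 * b * s * h * f                       # up- and down-projection
    logits = 2 * b * s * h * V                    # final output layer
    return 3 * (L * (attn + mlp) + logits)        # backward ~ 2x forward
```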
Experimental YAML configs

See merge request ADLR/megatron-lm!1134
This reverts commit fe1f23c.
