A lot of people in the community use HuggingFace Trainer for training, but sometimes it’s not flexible enough or is missing certain features (native TP/PP/EP, etc.). Migrating to Megatron-LM comes with a steep learning curve, and while TorchTitan is lighter, it still takes some effort to learn and doesn’t fully support features like Flash Attention and Liger Kernel yet (correct me if I’m wrong).
One way to make TorchTitan more accessible could be allowing some of its features to work with existing HuggingFace Trainer code with just minor tweaks, like different parallelisms. That way, more users might give it a try even if it doesn’t fully support all training features yet.
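To make the request concrete, here is a minimal sketch of what "minor tweaks" could mean in practice, assuming PyTorch's DTensor tensor-parallel API and a Llama-style HF model. This is not an existing torchtitan or Trainer integration; the checkpoint name and the module names in the plan are placeholders.

```python
# Hypothetical sketch: apply PyTorch's DTensor-based tensor parallelism to a
# HuggingFace model before handing it to Trainer. Launch with torchrun; the
# checkpoint and the module names in the plan are placeholders and must match
# the real model's architecture.
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)
from transformers import AutoModelForCausalLM

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
tp_mesh = init_device_mesh("cuda", (dist.get_world_size(),), mesh_dim_names=("tp",))

model = AutoModelForCausalLM.from_pretrained("your-llama-style-checkpoint")  # placeholder

# Canonical column-/row-wise sharding of the attention and MLP projections.
# A real integration also has to handle per-rank head counts, rotary caches,
# loss parallelism, etc. -- which is the glue this issue is asking for.
for layer in model.model.layers:
    parallelize_module(
        layer,
        tp_mesh,
        {
            "self_attn.q_proj": ColwiseParallel(),
            "self_attn.k_proj": ColwiseParallel(),
            "self_attn.v_proj": ColwiseParallel(),
            "self_attn.o_proj": RowwiseParallel(),
            "mlp.gate_proj": ColwiseParallel(),
            "mlp.up_proj": ColwiseParallel(),
            "mlp.down_proj": RowwiseParallel(),
        },
    )

# The sharded model would then be passed to transformers.Trainer as usual;
# whether Trainer's own DDP/FSDP wrapping composes with this is the open question.
```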
@huyiwen Thanks for asking. This is something we are considering.
> One way to make TorchTitan more accessible could be allowing some of its features to work with existing HuggingFace Trainer code with just minor tweaks, like different parallelisms.
I wonder if you could provide a more concrete list of requirements / to-do items in order to integrate with "HF Trainer"?
I’m currently working on an MoE model and looking to implement expert parallelism. Writing EP / EP+TP / EP+DP from scratch with torch.distributed communication primitives is pretty challenging, especially if I want good training speed. That’s why the parallelisms provided by DTensor and torchtitan seem like a solid option.
Beyond EP, it might also be a good idea to enable people to try out PP and TP without relying on Megatron-LM. This way, more people could train larger models with limited resources.
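To illustrate why hand-rolling EP is painful, here is a minimal equal-capacity dispatch/combine sketch written directly against torch.distributed. The mesh shape, capacity, and function names are illustrative, and a real implementation would also need routing, load balancing, and autograd-aware collectives, which is exactly the machinery DTensor/torchtitan would provide.

```python
# Illustrative sketch of hand-rolled expert parallelism (not torchtitan code):
# a 2-D device mesh with a data-parallel and an expert-parallel dimension, and
# an equal-capacity all-to-all to dispatch tokens to the ranks that own their
# experts. Launch with torchrun; ep_size and tensor shapes are placeholders.
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

ep_size = 2                                   # experts sharded over 2 ranks
world = dist.get_world_size()                 # must be divisible by ep_size
mesh = init_device_mesh("cuda", (world // ep_size, ep_size), mesh_dim_names=("dp", "ep"))
ep_group = mesh.get_group("ep")

def ep_all_to_all(x: torch.Tensor) -> torch.Tensor:
    """Equal-split all-to-all over the ep dimension.

    x: [ep_size * capacity, hidden]; chunk i holds the tokens routed to ep-rank i.
    Applying it twice returns tokens to their original ranks (dispatch/combine).
    """
    out = torch.empty_like(x)
    dist.all_to_all_single(out, x, group=ep_group)
    return out

capacity, hidden = 8, 16
routed = torch.randn(ep_size * capacity, hidden, device="cuda")
local_tokens = ep_all_to_all(routed)          # dispatch to this rank's experts
# ... apply this rank's local experts to local_tokens here ...
combined = ep_all_to_all(local_tokens)        # combine results back

# Missing from this sketch but required in practice: top-k routing, capacity
# overflow handling, and autograd-aware collectives so gradients flow through
# the all-to-all -- exactly the parts that are hard to get right and fast.
```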
For the most part, torchtitan doesn't implement the parallelisms themselves -- the core parallelism code lives in PyTorch.
Is your request that "pytorch should make its parallelisms easily usable in HF Trainer"? In fact, AFAIK HF already integrates several of PyTorch's parallelisms. cc: @kwen2501
Can you be more specific about how you would hope torchtitan to adapt?
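For reference (my reading, not a statement from the thread), one example of that existing integration is PyTorch FSDP, which HF Trainer already exposes purely through TrainingArguments; TP/PP/EP have no comparable switch, which is the gap this issue is about. Values below are placeholders.

```python
# Example of a PyTorch parallelism that HF Trainer already surfaces: FSDP can
# be enabled from TrainingArguments alone (run under torchrun or accelerate).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    fsdp="full_shard",   # PyTorch FullyShardedDataParallel, driven by Trainer
)
# At the time of this discussion there is no analogous TrainingArguments knob
# for tensor, pipeline, or expert parallelism, which is what this issue asks
# torchtitan/PyTorch to make easier to reach.
```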