Description
Thank you for releasing this amazing work and codebase! I'm a beginner working on human-to-humanoid imitation learning, and I'm trying to reproduce your results to better understand the training process and system design.
I've just successfully run the training code (train_hydra.py) for the OmniH2O teacher policy. My current environment is:
GPU: NVIDIA A4000 / V100
CUDA: 12.6
Isaac Gym: Preview 4
OS: Ubuntu 22.04
During training, I noticed that the estimated time to complete 1M iterations is roughly 3 weeks to 1 month. I also verified that the GPU pipeline is active (see the sketch below), but training is still quite slow.
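For reference, here is a minimal sketch of how I'm confirming that both the tensor pipeline and the PhysX solver run on the GPU. The gymapi calls are standard Isaac Gym Preview 4 API; the device indices and dt are just placeholders for my setup, not this repo's actual values:

```python
from isaacgym import gymapi

gym = gymapi.acquire_gym()

# Both flags below must be True for fully GPU-resident training.
sim_params = gymapi.SimParams()
sim_params.dt = 1.0 / 60.0          # placeholder physics time step
sim_params.use_gpu_pipeline = True  # keep state tensors on the GPU
sim_params.physx.use_gpu = True     # run the PhysX solver on the GPU

# Compute device 0, graphics device 0 (pass -1 for graphics when headless).
sim = gym.create_sim(0, 0, gymapi.SIM_PHYSX, sim_params)

# prepare_sim must be called before the GPU tensor API can be used.
gym.prepare_sim(sim)
```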
I wanted to ask:
1. Is this long training time expected/normal when reproducing the full OmniH2O teacher policy?
2. Or is there something I might have misconfigured (e.g., physics time step, rendering settings, etc.)? The sketch below lists the knobs I'm double-checking.
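In case it helps, here is the small helper I'm using to dump the settings that usually dominate Isaac Gym training speed. The sim_params attributes are standard Isaac Gym API; headless and num_envs are passed in explicitly because I'm not sure where this repo's Hydra config stores them:

```python
from isaacgym import gymapi

def report_throughput_settings(sim_params: gymapi.SimParams,
                               headless: bool, num_envs: int) -> None:
    """Print the knobs that usually dominate Isaac Gym training speed."""
    print("headless:", headless)             # rendering on silently costs a lot
    print("num_envs:", num_envs)             # parallel envs dominate throughput
    print("dt:", sim_params.dt)              # physics time step
    print("substeps:", sim_params.substeps)  # PhysX substeps per physics step
    print("use_gpu_pipeline:", sim_params.use_gpu_pipeline)
    print("physx.use_gpu:", sim_params.physx.use_gpu)
```

My understanding is that headless mode and a large num_envs are the biggest levers; please correct me if the expected setup is different.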
Thanks!