Hi everyone!
Here is my small contribution to this project: I completely refactored the infer.py file to make it clearer and easier to maintain. I also added the ability to distribute Stage 2 across all available GPUs using a round-robin approach (instead of a manual batch_size specification), and I added multiple comments for better understanding. I hope this helps improve infer.py. On my side, Stage 1 runs entirely on the CPU, because I had a bad time trying to use device_map="auto", tensor parallel, DDP, and torch.nn.DataParallel.
Feel free to use from there anything you need:
https://github.com/ValfarDeveloper/YuE-Refactored/blob/bc0feb64d5b5cac7772a66a2f5e3d397f5c6dfe9/inference/infer.py
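For anyone curious what the round-robin idea looks like in isolation, here is a minimal sketch. It is not the actual code from the linked infer.py; the `tasks` list and the `num_gpus` count (which in practice would come from `torch.cuda.device_count()`) are illustrative placeholders.

```python
from itertools import cycle

def round_robin_assign(tasks, num_gpus):
    """Map each Stage 2 work item to a GPU device string in round-robin order.

    tasks:    list of work items (e.g. audio segments to process)
    num_gpus: number of available GPUs (normally torch.cuda.device_count())
    """
    devices = cycle(range(num_gpus))  # 0, 1, ..., num_gpus-1, 0, 1, ...
    return {task: f"cuda:{next(devices)}" for task in tasks}

# Example: 5 segments spread over 2 GPUs
assignments = round_robin_assign(["seg0", "seg1", "seg2", "seg3", "seg4"], 2)
print(assignments)
# {'seg0': 'cuda:0', 'seg1': 'cuda:1', 'seg2': 'cuda:0', 'seg3': 'cuda:1', 'seg4': 'cuda:0'}
```

The appeal of this over a manual batch_size is that the work balances itself across however many GPUs the machine actually has, with no per-machine tuning.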