
Infer.py code refactored to make it simpler to maintain #54

Open
ValfarDeveloper opened this issue Feb 6, 2025 · 1 comment

Comments

@ValfarDeveloper

Hi everyone!

Here is my small contribution to this project: I completely refactored the infer.py file to make it clearer and easier to maintain. I also added a feature that distributes Stage 2 across all available GPUs using a round-robin approach (instead of relying on a manual batch_size specification). I added multiple comments for better understanding, and I hope this helps improve infer.py. On my side, Stage 1 runs entirely on the CPU, because I had a bad time trying to use device_map="auto", tensor parallelism, DDP, and torch.nn.DataParallel.

Feel free to use from there anything you need:

https://github.com/ValfarDeveloper/YuE-Refactored/blob/bc0feb64d5b5cac7772a66a2f5e3d397f5c6dfe9/inference/infer.py
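Editor's note: the linked file contains the full refactor. As a rough illustration of the round-robin idea described above, here is a minimal sketch of cycling independent Stage 2 jobs over the visible GPUs; the names `segments`, `stage2_model`, and `run_stage2_segment` are placeholders, not YuE's actual API.

```python
# Minimal sketch of round-robin GPU assignment for independent Stage 2 jobs.
from itertools import cycle
import torch

def assign_devices(num_jobs: int):
    """Return a device for each job, cycling over all visible GPUs."""
    if torch.cuda.is_available():
        devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
    else:
        devices = [torch.device("cpu")]  # fall back to CPU when no GPU is visible
    rr = cycle(devices)
    return [next(rr) for _ in range(num_jobs)]

# Hypothetical usage: each Stage 2 segment is processed on the next device in the cycle.
# for segment, device in zip(segments, assign_devices(len(segments))):
#     model = stage2_model.to(device)   # or keep one model replica per device
#     output = run_stage2_segment(model, segment, device)
```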

@a43992899
Collaborator

Thanks! Appreciate it.

@hf-lin Do you mind taking a look?

@a43992899 a43992899 added the enhancement New feature or request label Feb 7, 2025