
Instructions to run on Mac #51

Open
fire17 opened this issue Feb 5, 2025 · 2 comments

Comments

@fire17

fire17 commented Feb 5, 2025

Hi there, really excited about open-source music gen :)
Thanks for the awesome work!

Was wondering how to run this on a Mac (currently on an M3 Max).
Couldn't find instructions in the README.

Thanks a lot and all the best

PS - also can't wait to see music gen getting faster than RT, i.e. 30 seconds of audio in less than 30 seconds... or just a pure omni real-time music model that generates music continuously, like a player or a DJ.
I'm betting on a year or less haha, that would be epic

@a43992899
Collaborator

Related issue: #2

I need to export the model to llama.cpp and implement the audio tokenizer in C++.

@thinkyhead

It appears the infer.py script will need to be extended to use MPS where it currently only uses CUDA. Not all CUDA features are available on MPS, so there may be some blockers there. Parts of TorchAudio also depend on CUDA for speed, so some stages of generation may have to run on the CPU. If it can be made to work by simply substituting "mps" for "cuda" in the script, that would be great! But it won't be that easy, and we need some smarties to come along to help.
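For anyone wanting to experiment, a minimal sketch of the kind of device-selection change described above (names like `pick_device` are illustrative, not from this repo's infer.py):

```python
# Hypothetical sketch: prefer CUDA, fall back to MPS on Apple Silicon,
# then to CPU. This replaces a hardcoded "cuda" device string.
import torch


def pick_device() -> torch.device:
    """Return the best available compute device."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    # torch.backends.mps is available in PyTorch >= 1.12
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


device = pick_device()
model_device = device  # model.to(device), tensors.to(device), etc.
```

Ops that are unsupported on MPS can be routed to the CPU per-op by setting the `PYTORCH_ENABLE_MPS_FALLBACK=1` environment variable before launching the script, which may help with the partial-coverage problem mentioned above (at some cost in speed).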
