Multiple CPUs? #13
Comments
The code will run on multiple CPUs provided you've linked it with the multithreaded builds of BLAS libraries like MKL or OpenBLAS, and have not undertaken steps to disable multithreading (such as through the setting of certain OpenMP environment variables). There is no support at present for multiple GPUs, and we do not foresee adding it to the current Keras+Theano code because Theano itself is being sunset. A future rewrite of this codebase to another framework may possibly support multiple GPUs.
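As a minimal sketch of the thread-count control mentioned above: the OpenMP/BLAS environment variables must be set before the numerical libraries are loaded, so in Python they go before the first `import numpy`. Which variable actually takes effect depends on which BLAS build is linked (the variable names below are the standard OpenMP/MKL/OpenBLAS ones, not anything specific to this repo):

```python
import os

# Must be set BEFORE NumPy/Theano load the BLAS library.
# Which one takes effect depends on the linked BLAS build.
os.environ["OMP_NUM_THREADS"] = "4"       # generic OpenMP thread count
os.environ["MKL_NUM_THREADS"] = "4"       # Intel MKL
os.environ["OPENBLAS_NUM_THREADS"] = "4"  # OpenBLAS

import numpy as np

# A large matrix product now uses at most the requested number of threads.
a = np.random.rand(1000, 1000)
b = a @ a
print(b.shape)  # → (1000, 1000)
```

Setting the variables to `1` is the usual way to *disable* multithreading, which is what the answer above warns against doing inadvertently.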
I'm trying out the demo and started training. It took about 4 hours to complete 1 epoch. Can I stop/restart training and test at any time? By default it's set to run for 200 epochs, which for my system would be ~1 month straight. Did you find this amount of training is needed for good performance on MusicNet?
A runtime on the order of 4 hours per epoch is consistent with running on CPU; with gpuarray 0.7.5 and a P100 GPU it runs much faster. I did not code the MusicNet aspect, so I can't vouch for its resumability.
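Since resumability isn't guaranteed by the repo itself, a generic checkpoint-and-resume pattern (not this codebase's code; the filename, state layout, and the stand-in "training" step are all illustrative) would let a long run be stopped and restarted:

```python
import os
import pickle

CKPT = "train_state.pkl"  # illustrative filename, not from this repo

def load_state():
    # Resume from a prior run if a checkpoint exists, else start fresh.
    if os.path.exists(CKPT):
        with open(CKPT, "rb") as f:
            return pickle.load(f)
    return {"epoch": 0, "weights": [0.0]}

def save_state(state):
    with open(CKPT, "wb") as f:
        pickle.dump(state, f)

state = load_state()
for epoch in range(state["epoch"], 5):   # 5 stands in for the 200 epochs
    state["weights"][0] += 1.0           # stand-in for one epoch of training
    state["epoch"] = epoch + 1
    save_state(state)                    # persist after every epoch

print(state["epoch"])  # → 5
os.remove(CKPT)                          # cleanup for this demo only
```

In Keras specifically, the same idea is usually expressed with a `ModelCheckpoint` callback plus `load_model` on restart, but whether that slots cleanly into this repo's MusicNet script is untested.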
Thanks for the suggestion. Do you use the latest theano/libgpuarray/pygpu releases?
@austinmw This is approximately the setup for MusicNet: the newest Theano/libgpuarray releases with somewhat older cuDNN/CUDA libraries. It still works very well.
Thanks, I had to edit .theanorc with the CUDA path, but got it working with CUDA 9.0!
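For reference, the edit mentioned above would look roughly like the following `.theanorc`; the install paths and whether the `[dnn]` section is needed depend entirely on the local setup (these values are examples, not this user's actual config):

```ini
[global]
device = cuda
floatX = float32

[cuda]
root = /usr/local/cuda-9.0

[dnn]
# Only needed if cuDNN lives outside the default search paths
include_path = /usr/local/cuda-9.0/include
library_path = /usr/local/cuda-9.0/lib64
```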
Is there any way to run the code on multiple CPUs or GPUs?