Why do FFI calls require a CUDA implementation? #28521
Unanswered
pseudo-rnd-thoughts asked this question in Q&A
Replies: 0 comments
I'm adding XLA support to a Python/C++ project, so I have been reading the FFI documentation. The C++ code all runs on the CPU; I just want to be able to JIT with the Python/C++ code unaffected. I've successfully compiled the code and got it working. However, as soon as I have jax[cuda12] installed, I get the following error:

UNIMPLEMENTED: No registered implementation for custom call to "func_name" for platform CUDA

From an uninformed perspective, this doesn't make any sense to me.
The FFI documentation only mentions adding a cudaStream_t parameter at the beginning for the CUDA implementation, but my code never needs to run on the GPU. I hope someone understands and can help.
Code for reference: https://github.com/pseudo-rnd-thoughts/Arcade-Learning-Environment/tree/xla-support
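For context, FFI targets are registered per platform, and with jax[cuda12] installed JIT-compiled functions default to the GPU backend, so XLA looks for a CUDA registration of the custom call. A minimal sketch of one workaround, pinning execution to a CPU device so XLA lowers for the CPU platform instead (the `register_ffi_target` line is illustrative only, since it needs the real capsule from the compiled extension, and `jnp.square` stands in for the actual FFI call):

```python
import jax
import jax.numpy as jnp

# Illustrative: the C++ handler is registered only for the CPU platform,
# e.g. jax.ffi.register_ffi_target("func_name", capsule, platform="cpu")

# Pin computation to a CPU device so the custom call is lowered for
# the CPU platform even when jax[cuda12] is installed.
cpu = jax.devices("cpu")[0]

@jax.jit
def wrapped(x):
    # Stand-in for the CPU-only FFI call.
    return jnp.square(x)

with jax.default_device(cpu):
    out = wrapped(jnp.arange(4.0))
```

This keeps the rest of the program free to use the GPU while the FFI call runs where its only implementation lives.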