
Conversation

@stemann (Contributor) commented Feb 21, 2022

Currently a WIP. Aims to resolve FluxML/Torch.jl#20.

Same approach as for ONNXRuntime: separate binaries for CPU-only, CUDA, etc. (CPU: #4369, CUDA: #4386).

Relates to: #1529

@vchuravy (Member) commented

Does this need to be a different package? Or can we turn this into a variant?

@stemann (Contributor, Author) commented Feb 22, 2022

> Does this need to be a different package? Or can we turn this into a variant?

If CPU, CUDA, and ROCm builds all eventually become available, wouldn't the different sets of dependencies necessitate different packages?

@vchuravy (Member) commented

You still have the same issue: if Torch.jl depends on Torch_jll, TorchCUDA_jll, etc., all of those dependencies must be fulfilled. Better to use a platform tag and platform-specific dependencies.

@stemann (Contributor, Author) commented Mar 5, 2022

> You still have the same issue: if Torch.jl depends on Torch_jll, TorchCUDA_jll, etc., all of those dependencies must be fulfilled. Better to use a platform tag and platform-specific dependencies.

Right, I see. Is there a discussion or registry of defined/reserved platform tags somewhere? I.e., is "cuda" reserved for CUDA, and similarly e.g. "amd_rocm", "intel_oneapi", "apple_coreml", ...?
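For context, a minimal sketch of what a custom platform tag looks like with Julia's `Base.BinaryPlatforms`. The `"cuda"` tag name and version value here are assumptions drawn from the discussion, not entries in any official tag registry:

```julia
using Base.BinaryPlatforms

# Any keyword argument to Platform becomes a free-form tag on the
# platform; "cuda" here is a hypothetical tag for illustration.
p = Platform("x86_64", "linux"; cuda = "11.3")

# The tag is recorded alongside the standard arch/os/libc tags and is
# appended to the platform triplet string.
println(tags(p)["cuda"])   # "11.3"
println(triplet(p))        # triplet includes a "cuda+11.3" suffix

# Package resolution treats platforms with different tag values as
# distinct, which is how a single JLL name could ship CPU-only and
# CUDA variants, selected at install time via platform augmentation.
```

This is roughly the mechanism behind platform-augmented JLLs: the host platform gets the extra tag at resolution time, and the artifact matching that tag is the one downloaded.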

@stemann force-pushed the stemann/torch branch 2 times, most recently from f3eebc1 to 3b803f1 on March 5, 2022, 15:00
@stemann (Contributor, Author) commented Sep 4, 2022

Superseded by #4554
