Nvidia GPU support #1
With CDI support now added to NixOS (NixOS/nixpkgs#284507), GPU access should work in a container. See this thread for details: https://discourse.nixos.org/t/nvidia-gpu-support-in-podman-and-cdi-nvidia-ctk/36286

**Podman**

Add the CDI device(s) to `jellyfin`:

```yaml
jellyfin:
  image: lscr.io/linuxserver/jellyfin
  container_name: jellyfin
  security_opt:
    - label=disable
  devices:
    - nvidia.com/gpu=all
```

**Docker**

Enable the CDI feature flag in your NixOS config:

```nix
{
  virtualisation.docker.daemon.settings = {
    features = { cdi = true; };
  };
}
```

Then add the CDI device(s) to `jellyfin`:

```yaml
jellyfin:
  image: lscr.io/linuxserver/jellyfin
  container_name: jellyfin
  security_opt:
    - label=disable
  devices:
    - nvidia.com/gpu=all
```
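For Podman on NixOS, the CDI spec that makes `nvidia.com/gpu=all` resolvable is generated by the NVIDIA container toolkit module. A minimal sketch, assuming the option name used in recent nixpkgs releases (see the module linked later in this thread):

```nix
{
  # Assumption: option name as in recent nixpkgs. This generates the CDI
  # spec on the host so that CDI device names like `nvidia.com/gpu=all`
  # resolve to the actual GPU(s) inside containers.
  hardware.nvidia-container-toolkit.enable = true;
}
```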
Any news on the Docker side? I personally added the CDI devices to my Docker Compose config; when I run
I personally don't use Docker. But I did some digging into Docker PRs and found that CDI support is still experimental: moby/moby#47087

To enable it, you'll first need to set the CDI feature flag: https://docs.docker.com/reference/cli/dockerd/#enable-cdi-devices

In your NixOS config, try this:
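The snippet referenced here is presumably the same `features.cdi` setting quoted earlier in the thread:

```nix
{
  virtualisation.docker.daemon.settings = {
    features = { cdi = true; };
  };
}
```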
And, of course, you will need to pass in `devices` in CDI format as part of your Compose config.
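Outside NixOS, the equivalent (per the dockerd reference linked above) is to set the feature flag directly in `/etc/docker/daemon.json`:

```json
{
  "features": {
    "cdi": true
  }
}
```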
I'm trying to do my tests using InvokeAI (that's the only GPU Docker tool I have right now). The Compose part looks like this:

Though the generated Docker `.nix` config doesn't contain the device part; I don't know if it was ignored because of an error on my side?
No error on your side: compose2nix does not yet support `devices`.

Do you have the CDI feature enabled in your Docker config? If not, it's strange that the GPU is detected when running with Docker Compose directly.

Two other minor notes:
Sounds great to me!

Nope, I only have that enabled:
Ah, gotcha, thanks for clarifying! It looks like the feature flag is set by the NixOS module here: https://github.com/NixOS/nixpkgs/blob/nixos-24.05/nixos/modules/services/hardware/nvidia-container-toolkit/default.nix#L72

I'll also update the README with these steps for others who want to get CDI GPU support running in Docker.
In the meantime, can you please try passing in your devices via the following?

```yaml
services:
  invokeai-cuda:
    <<: *invokeai
    restart: unless-stopped
    devices:
      - nvidia.com/gpu=all
```

The change I am making will do exactly this, so you'll just have another way to write it in Compose.
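As a side note (a sketch based on NVIDIA's CDI device naming, not something shown in this thread): CDI device names can also select an individual GPU by index or UUID instead of `all`:

```yaml
devices:
  # Hypothetical example: expose only the first GPU to the container
  - nvidia.com/gpu=0
```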
Hey, I made the changes you provided (thanks) and updated my flake to match your latest commit.

Edit: error log for standalone Docker Compose:

EDIT 2: I guess I could make another service with the working standalone Docker Compose, but honestly I don't have a use for that anymore...
Awesome! If you update to the latest compose2nix (including the PR I just merged), you can go back to your original config and it should work with compose2nix as well :)

See step (2) in the README section I just added: https://github.com/aksiksi/compose2nix?tab=readme-ov-file#nvidia-gpu-support
Some random links:

- `deploy` devices: https://docs.docker.com/compose/compose-file/deploy/#devices
- docker-compose: Passing GPU with `driver: cdi` is not supported: containers/podman#19338