Gpu detected in container created by podman run but not by podman compose #25196
Comments
Could it be fixed with #25171? Can you please try with the development version of Podman from the git main branch?
I'm experiencing this same issue. When using
I followed Building the Podman client and client installer on Windows successfully. Confirmed
Further, going into a shell on the podman machine, it looks like podman inside it is 5.3.2. Is there something I'm missing? This is admittedly my first go at building this project. Is there a separate way to build the podman machine, or to force the dev version? Finally, I found this same issue reported on the NVIDIA dev forums, but with a note indicating
The podman machine WSL image uses an old build process that simply takes the stable Fedora version, so you will need to wait until podman v5.4.0 lands in Fedora stable and then until the image gets rebuilt. That might take a week or more. I did a quick test with
on main and that worked, so the CDI device can be listed as a normal device; that should just work once the server is updated to 5.4.0, I think.
Issue Description
I'm running Podman Desktop on Windows 11 23H2. Creating a container with `podman run` detects my GPU correctly; creating one with `podman compose` does not.

Steps to reproduce the issue
1. Run `podman run -p 11434:11434 -v ollama_ollama_data:/root/.ollama --device nvidia.com/gpu=all --name ollama ollama/ollama:latest`
2. Create a compose file for the same container
3. Run `podman compose up`
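
For reference, a minimal compose file sketch for step 2, mirroring the `podman run` command above. It requests the GPU through the Compose specification's CDI device reservation (`driver: cdi` under `deploy.resources.reservations.devices`); whether `podman compose` actually passes this through to the container is the subject of this issue, so treat it as an assumption, not a confirmed workaround:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_ollama_data:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            # CDI device name, same as --device nvidia.com/gpu=all on the CLI
            - driver: cdi
              device_ids:
                - nvidia.com/gpu=all

volumes:
  ollama_ollama_data:
    external: true
```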
Describe the results you received
Running the container with `podman run` shows:

```
level=INFO source=types.go:131 msg="inference compute" id=GPU-2e3db6bb-6d29-dd8d-c4cb-2e891458ba6c library=cuda variant=v12 compute=8.9 driver=12.8 name="NVIDIA GeForce RTX 4090" total="24.0 GiB" available="22.5 GiB"
```

Running the container from the compose file shows:

```
level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
```
Describe the results you expected
The GPU should be detected for both `podman run` and `podman compose up`.
podman info output
Podman in a container: No
Privileged Or Rootless: Privileged
Upstream Latest Release: Yes
Additional environment details
WSL version: 2.3.26.0
Kernel version: 5.15.167.4-1
WSLg version: 1.0.65
MSRDC version: 1.2.5620
Direct3D version: 1.611.1-81528511
DXCore version: 10.0.26100.1-240331-1435.ge-release
Windows version: 10.0.22631.4830
Additional information
No response