
Research GPU use in Docker and Podman on Windows #5972

Open
davidpanderson opened this issue Dec 23, 2024 · 1 comment

davidpanderson commented Dec 23, 2024

Describe the bug

Figure out how to run a CUDA program in Docker on WSL and Windows:

  • What needs to be in the Dockerfile?
  • What's the Docker run command?
  • Does anything special (driver, CUDA toolkit) need to be installed in the WSL image, or on the host (Win)?
  • All the above for Podman
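
As a starting point for the Dockerfile question, a minimal sketch (the image tag and app name are placeholders; this assumes the image ships only the CUDA runtime and relies on the host driver being injected by the NVIDIA Container Toolkit):

```dockerfile
# Base on one of NVIDIA's published CUDA runtime images.
# The tag is an assumption — pick one whose CUDA version is supported
# by the driver installed on the Windows host.
FROM nvidia/cuda:12.4.1-runtime-ubuntu22.04

# Copy a prebuilt CUDA binary into the image (hypothetical name).
COPY my_cuda_app /usr/local/bin/my_cuda_app

CMD ["/usr/local/bin/my_cuda_app"]
```

Note the image itself contains no driver; the toolkit bind-mounts the host's driver libraries into the container at run time, which is why the same image can run on WSL2 and on native Linux.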

Possibly useful:
https://docs.nvidia.com/ai-enterprise/deployment/vmware/latest/docker.html
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html#installing-the-nvidia-container-toolkit
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/cdi-support.html
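
Per the CDI support doc above, the Podman path would look roughly like this (a sketch; assumes `nvidia-ctk` from the NVIDIA Container Toolkit is installed on a Linux/WSL host):

```shell
# Generate a CDI spec describing the host's NVIDIA devices.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names the generated spec exposes.
nvidia-ctk cdi list

# Launch a container with all NVIDIA GPUs via the CDI device name.
podman run --rm --device nvidia.com/gpu=all \
    nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```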

And then:

  • How to access CUDA on Linux and Mac?
  • How to access AMD GPUs on various platforms?
  • How to access Apple GPUs on Mac?
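
For the AMD case on Linux, the usual ROCm approach is to pass the kernel GPU device nodes straight into the container rather than using a runtime plugin (a sketch; the image is AMD's published `rocm/rocm-terminal`):

```shell
# AMD ROCm on a Linux host: expose the KFD compute interface and the
# DRI render nodes to the container, then verify with rocminfo.
docker run --rm \
    --device=/dev/kfd --device=/dev/dri \
    --security-opt seccomp=unconfined \
    rocm/rocm-terminal rocminfo
```

Apple GPUs are a different story: Docker containers on macOS run inside a Linux VM with no Metal passthrough, so there is no container-side equivalent of the flags above.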


aptalca commented Dec 29, 2024

For NVIDIA, the Container Toolkit and the NVIDIA drivers need to be installed on a Linux host. On Windows, Docker Desktop includes the toolkit.

The container needs to be created with either `--runtime=nvidia` or `--gpus=all`. For OpenCL workloads you also need the OpenCL ICD bits (e.g. the `ocl-icd` loader) inside the container.
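
Concretely, the two flag spellings mentioned above look like this (the image tag is a placeholder; `nvidia-smi` is just a convenient smoke test):

```shell
# Modern syntax: Docker wires up all GPUs via the NVIDIA Container Toolkit.
docker run --rm --gpus all \
    nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Legacy syntax: select the nvidia runtime explicitly and opt in via env var.
docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
    nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

If either command prints the GPU table, the container can see the device and a CUDA program in the same container should run.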

The linuxserver Docker image works with CUDA: https://github.com/linuxserver/docker-boinc

Projects
Status: Backlog
Development

No branches or pull requests

3 participants