Tutorials:Install the nVidia docker system

</syntaxhighlight>
After that, it is at least necessary to log out and back in to update the privileges, but a reboot cannot hurt either.
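To check whether logging out and back in is still needed, you can inspect the groups of your current session. This is a small sketch under the assumption that the step above added your user to the <code>docker</code> group:

```shell
# Check whether the docker group is already active in the current session
# (assumes the step above added your user to the docker group)
if id -nG | grep -qw docker; then
    echo "docker group active, no re-login needed"
else
    echo "not yet active: log out and back in (or open a shell with 'newgrp docker')"
fi
```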
== Installing the nVidia docker extension ==
The default docker installation is not able to talk to the nVidia GPUs present in your system. Thus, you have to install an extension by nVidia which allows it to do so. Run the following script:

<syntaxhighlight lang="bash">
#!/bin/bash
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | \
  sudo apt-key add -

# hard-coded distro ID so that it also works on Ubuntu flavors like Mint
# ubuntu16.04 is also available, maybe some other versions (see above github)
distribution=ubuntu18.04
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update

# Install nvidia-docker2 and reload the Docker daemon configuration
sudo apt-get install -y nvidia-docker2
sudo pkill -SIGHUP dockerd
</syntaxhighlight>

That's it, your system is now configured to run nVidia's base containers for GPU utilization. To test it and actually run your first container, try out:

<syntaxhighlight lang="bash">
docker run --runtime=nvidia --rm nvidia/cuda:9.0-devel nvcc --version
</syntaxhighlight>

This will pull the docker container with CUDA 9.0 and run the command "nvcc --version" inside it. In effect, you should see similar output as on your own system, but possibly with a different version of CUDA displayed. You can also try to run nvidia-smi inside the container:

<syntaxhighlight lang="bash">
docker run --runtime=nvidia --rm nvidia/cuda:9.0-devel nvidia-smi
</syntaxhighlight>

This should show similar output as when you run nvidia-smi directly, i.e. the same graphics card(s).
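As an aside on the hard-coded distro ID: it could instead be derived from <code>/etc/os-release</code>, as sketched below. The catch, and the reason the script above hard-codes it, is that on Ubuntu flavors like Mint this yields the flavor's own ID, which nVidia's package repository does not know:

```shell
# Derive a distribution ID such as "ubuntu18.04" from /etc/os-release.
# On Ubuntu flavors like Mint this prints the flavor's own ID instead,
# which nVidia's repository does not serve - hence the hard-coding above.
distribution=$(. /etc/os-release; echo "$ID$VERSION_ID")
echo "$distribution"
```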
