Tutorials:Install the nVidia docker system

then everything is fine. When you are done testing, enter "quit()" to exit the Python interpreter; this also terminates the container.
 
 
 
== Making nvidia-docker the default runtime (optional) ==
 
You will have noticed that you had to pass "--runtime=nvidia" to every docker command that runs a GPU container. This is fine in principle, but if you do it on a regular basis (or if you want to set up your own Kubernetes minikube for testing), you might wish to make it the default.
 
For this, edit /etc/docker/daemon.json to look as follows (this also configures a larger default shared-memory size, which is recommended for TensorFlow):
 
<syntaxhighlight lang="json">
{
    "default-runtime": "nvidia",
    "default-shm-size": "1g",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
</syntaxhighlight>
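A small mistake in /etc/docker/daemon.json (a missing comma, say) will prevent the Docker daemon from starting, so it is worth checking that the edited file is still valid JSON before restarting Docker. A minimal sketch in Python; the inlined string stands in for the file contents, and in practice you would load /etc/docker/daemon.json directly:

```python
import json

# Stand-in for the contents of /etc/docker/daemon.json;
# in practice: config = json.load(open("/etc/docker/daemon.json"))
DAEMON_JSON = """
{
    "default-runtime": "nvidia",
    "default-shm-size": "1g",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
"""

config = json.loads(DAEMON_JSON)  # raises an error on a JSON syntax mistake

# The default runtime must be one of the runtimes defined in the same file.
assert config["default-runtime"] in config["runtimes"]
print(config["default-runtime"])  # -> nvidia
```

Equivalently, running python3 -m json.tool /etc/docker/daemon.json on the command line will report any syntax error in the file.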
After saving the file, restart the Docker daemon (for example with "sudo systemctl restart docker") so the change takes effect. You are now ready for the next tutorial, which will show you how to use the NVIDIA GPU Cloud images as a basis for your own applications.
 
 
 
 
[[Category:Tutorials]]
