
Tutorials:Install the nVidia docker system

Access the containers on the nVidia GPU cloud
The first start will be slow, as all the images need to be downloaded; after that they are cached in your local storage and start up much faster.
You will enter an interactive Python interpreter which runs inside the container. To test whether GPU acceleration works in TensorFlow, you can issue for example the following commands in the interpreter (enter an empty line after each "with" block and take care to copy the indentation in front of the lines exactly): <syntaxhighlight lang="python">
import tensorflow as tf

# Place the matrix constants and the multiplication on the first GPU
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
    b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
    c = tf.matmul(a, b)

# Run the graph and print the resulting 2x2 matrix
with tf.Session() as sess:
    print(sess.run(c))
</syntaxhighlight> The next tutorial will show how to use these nVidia GPU cloud images as a basis for your own applications.
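To check the matrix product that the session prints, the same computation can be reproduced with plain NumPy; this is a minimal sketch assuming a Python environment with NumPy available (the nVidia TensorFlow container images normally include it):

```python
import numpy as np

# The same 2x3 and 3x2 matrices as in the TensorFlow example
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(2, 3)
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(3, 2)

# Matrix multiplication on the CPU, for comparison with sess.run(c)
c = a @ b
print(c)
# [[22. 28.]
#  [49. 64.]]
```

If the TensorFlow session prints the same matrix, the graph executed correctly; whether it actually ran on the GPU can be confirmed by passing `config=tf.ConfigProto(log_device_placement=True)` to the session.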
