</syntaxhighlight>
Combine with the volume mounts above, and you already have a working environment. For example, you could transfer some code and data of yours to your home directory, and run it in interactive mode in the container as a quick test. Remember to adjust paths to data sets or to mount the directories in the locations expected by your code.
<syntaxhighlight>
> kubectl exec -it gpu-pod -- /bin/bash
# cd /abyss/home/<your-code-repo>
# python ./main.py
</syntaxhighlight>
Note that there are timeouts in place: this is a demo pod which runs only for 24 hours, and an interactive session also has a time limit. It is therefore better to build a custom run script which is executed when the container in the pod starts. A job is a wrapper for a pod spec which can, for example, make sure that the pod is restarted until it has at least one successful completion. This is useful for long deep learning workloads, where a pod failure might happen in between (for example due to a node reboot). See the [https://kubernetes.io/docs/concepts/workloads/pods/ Kubernetes docs for pods] or [https://kubernetes.io/docs/concepts/workloads/controllers/job/ jobs] for more details.

If you do not have your code ready, you can quickly test whether GPU execution works by running demo code from [https://github.com/dragen1860/TensorFlow-2.x-Tutorials this tutorial] as follows:

<syntaxhighlight>
> kubectl exec -it gpu-pod -- /bin/bash
# cd /abyss/home
# git clone https://github.com/dragen1860/TensorFlow-2.x-Tutorials.git
# cd TensorFlow-2.x-Tutorials/12_VAE
# ls
README.md  images  main.py  variational_autoencoder.png
# pip3 install pillow matplotlib
# python ./main.py
</syntaxhighlight>
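
The job wrapper described above can be sketched as a minimal manifest. This is only an illustration: the names (<code>gpu-job</code>), the image placeholder, and the script path are assumptions you would replace with the values from your own pod spec.

<syntaxhighlight lang="yaml">
apiVersion: batch/v1
kind: Job
metadata:
  name: gpu-job                  # hypothetical name
spec:
  backoffLimit: 4                # recreate the pod up to 4 times until one completion succeeds
  template:
    spec:
      restartPolicy: Never       # let the job controller handle retries, not the kubelet
      containers:
      - name: gpu-job-container
        image: <your-image>      # placeholder: reuse the image from your pod spec
        command: ["python", "/abyss/home/<your-code-repo>/main.py"]   # your custom run script
        resources:
          limits:
            nvidia.com/gpu: 1    # request one GPU, as in the pod spec above
</syntaxhighlight>

Submitted with <code>kubectl apply -f job.yaml</code>, the job keeps recreating the pod after failures (up to <code>backoffLimit</code> attempts) until the script exits successfully, which is exactly the behavior you want for long-running training workloads.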