CCU:GPU Cluster Quick Start

== Running actual workloads on the cluster ==
<syntaxhighlight>
> kubectl apply -f gpu-pod.yaml
... wait until the pod is running; check with "kubectl get pods" or "kubectl describe pod gpu-pod"
> kubectl exec -it gpu-pod -- /bin/bash
# nvidia-smi
# python ./main.py
</syntaxhighlight>
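
A manifest like the <code>gpu-pod.yaml</code> referenced above might look as follows. This is a minimal sketch, assuming the cluster exposes GPUs through the NVIDIA device plugin (the <code>nvidia.com/gpu</code> resource); the container image and container name are illustrative, not prescribed by this guide:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  restartPolicy: Never
  containers:
  - name: gpu-container
    # Illustrative image; substitute the CUDA/framework image your workload needs
    image: nvidia/cuda:12.2.0-runtime-ubuntu22.04
    # Keep the pod alive so you can "kubectl exec" into it
    command: ["sleep", "infinity"]
    resources:
      limits:
        # Request one GPU from the NVIDIA device plugin (assumed to be installed)
        nvidia.com/gpu: 1
</syntaxhighlight>

Requesting the GPU under <code>resources.limits</code> is what causes the scheduler to place the pod on a GPU node and makes <code>nvidia-smi</code> work inside the container.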
 
Remember to clean up resources you are no longer using; this includes pods and jobs. For example, once your pod has finished whatever it is supposed to be doing, run
 
<syntaxhighlight>
> kubectl delete -f gpu-pod.yaml
</syntaxhighlight>
 
using the same manifest file you used to create the resource with <code>kubectl apply</code>.
== Accessing ports on the pod from your own system ==
