== Log in to the cluster and configure kubectl ==
You first need a working version of kubectl on your system. The cluster runs Kubernetes 1.28.12; the version of kubectl should match this. Check out the installation instructions in the [https://kubernetes.io/docs/tasks/tools/install-kubectl/ official Kubernetes documentation].
The login page to the cluster is [https://ccu-k8s.inf.uni-konstanz.de here]. Enter your credentials and you will get back an authorization token. Click on "full kubeconfig" on the left, and copy the content to a new file named ".kube/config" in your home directory. Note that the default namespace still has the template name "user-<firstname>-<lastname>"; replace this placeholder with your own username.
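For illustration, the context entry in your kubeconfig could then look roughly like the following sketch (the exact cluster and user entries are generated for you and may differ; the namespace line is the part you edit):

<syntaxhighlight lang="yaml">
contexts:
- context:
    cluster: ccu-k8s                       # generated cluster entry (name may differ)
    user: firstname-lastname               # generated user entry (name may differ)
    namespace: user-firstname-lastname     # <- replace with your own namespace
  name: ccu-k8s
</syntaxhighlight>

Afterwards, verify that kubectl can reach the cluster and your namespace: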
<syntaxhighlight>
> kubectl config use-context ccu-k8s
> kubectl get pods
No resources found in namespace user-firstname-lastname.
</syntaxhighlight>
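You can also confirm that your local kubectl client matches the cluster version mentioned above; once the connection works, the plain "kubectl version" command prints both the client and the server version:

<syntaxhighlight lang="bash">
> kubectl version
</syntaxhighlight>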
'''Note:''' It is not supported to store separate credentials on two different computers. What will happen in this case is that one of them consumes the refresh token, which then becomes invalid on the other one. If you need to access the cluster from a second computer, it is advised to use an ssh connection to your primary one, where you store the credentials.
== Create a pod to access the file systems ==
After logging in and adjusting the kubeconfig for the new cluster and your user namespace, you should be able to start your first pod. Create a work directory on your machine, and a file "ubuntu-test-pod.yaml". Its volumes section mounts the ceph filesystems into the container via host paths:
<syntaxhighlight lang="yaml">
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<your-username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
</syntaxhighlight>
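A complete manifest could look like the following sketch (it assumes the ubuntu:20.04 image and mirrors the volume layout of the GPU pod example further down this page; adjust <your-username> accordingly):

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-test-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "1d"]      # keep the container alive for one day
    volumeMounts:
    - mountPath: /abyss/home
      name: cephfs-home
      readOnly: false
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<your-username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
</syntaxhighlight>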
When you run this on the cluster, it will create a pod for you which runs a container based on an Ubuntu image, with the ceph filesystems mounted into it. Use the following commands to create the pod and check out its status:
<syntaxhighlight lang="bash">
> kubectl apply -f ubuntu-test-pod.yaml
> kubectl get pods
> kubectl describe pod ubuntu-test-pod
</syntaxhighlight>
Pay close attention to the event messages given at the end of the "describe pod" output; they give hints about what might be wrong if the pod does not start up.
When the pod finally reaches the status "Running", you can open a shell in the container, much like logging into a remote server. Do this and verify that the pod has been created correctly and the filesystems have been mounted successfully, for example with the commands below. You can also check whether you can access the data you have copied over, and obtain the numeric user and group id used for filesystem permissions:
<syntaxhighlight lang="bash">
> kubectl exec -it ubuntu-test-pod -- /bin/bash
# cd /abyss/home/
# ls
<your files, possibly including data which was automatically copied over from your volumes on the old cluster>
# id
</syntaxhighlight>
From within the container, you have access to the internet, can install packages which are still missing, and can copy over your code and data via rsync or by pulling them with e.g. git or svn. You can also push files into the container from your local machine using kubectl:
<syntaxhighlight lang="bash">
> kubectl cp <my-files> ubuntu-test-pod:/abyss/home/
</syntaxhighlight>
This also works in the other direction, to get data out of the pod. For more ideas about what you can do with kubectl, which is a powerful and complex tool, please refer to the basic [https://kubernetes.io/docs/reference/kubectl/cheatsheet/ kubectl cheat sheet] or a more [https://github.com/dennyzhang/cheatsheet-kubernetes-A4 advanced version here].
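As an illustration of the reverse direction mentioned above (the source path is only a placeholder; adjust it to whatever you want to retrieve):

<syntaxhighlight lang="bash">
> kubectl cp ubuntu-test-pod:/abyss/home/<some-results-dir> ./<some-results-dir>
</syntaxhighlight>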
== Pod configuration on the new cluster ==

=== User namespace, pod security and quotas ===

Each user works in their own namespace now, which is auto-generated when your login is created. The naming convention is as follows:
* Login ID : firstname.lastname
* Username : firstname-lastname
* Namespace: user-firstname-lastname
That means you replace all '.'s in your login ID with a '-' to obtain the username, and prepend "user-" to obtain the namespace. Thus, you should set your default namespace in the kubeconfig accordingly (see the command example at the end of this subsection), and perhaps have to update your pod configurations. For security reasons, containers are forced to run with your own user id and a group id of "10000". These will also be the ids used to create files and directories, and they decide the permissions you have available in the file system. The pod security policy which is active for your namespace will automatically fill in this data. Note that the security policy for pods is very restrictive for now, to detect all problematic cases. In particular, you can not switch to root inside containers anymore. Please inform me if the security policies disrupt your usual workflow so that we can work something out. Finally, there is now a mechanism in place to set resource quotas for individual users. The preset is quite generous at the moment since we have plenty of resources, but if you believe your account is too limited, please contact me.
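To set the default namespace for your current context from the command line instead of editing the kubeconfig file by hand, you can use kubectl's standard config command:

<syntaxhighlight lang="bash">
> kubectl config set-context --current --namespace=user-firstname-lastname
</syntaxhighlight>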
=== Persistent volume management (or lack thereof) ===

The ceph storage cluster provides a file system which is mounted on every node in the cluster, so the directories you mount into your pods are available no matter where they are scheduled. Pods are allowed to mount a subset of this filesystem as a host path, see the example pod specs on this page. The following directories can be mounted and used by anyone:
* '''/cephfs/abyss/home/<your-username>''': this is your personal home directory which you can use any way you like.
* '''/cephfs/abyss/shared''': a shared directory where every user has read/write access. It is a standard unix filesystem; everyone has an individual user id but is (for now) in the same user group, so your data is not secure here from manipulation or deletion. You can set the permissions for files and directories you create accordingly to restrict or allow access. To avoid total anarchy in this filesystem, please use sensible names and organize your data in subdirectories. For example, put personal files which you want to make accessible to everyone in "/abyss/shared/users/<username>". Be considerate towards other users. I will monitor how it works out and whether we need more rules here. If you need more private storage shared only between all members of a trusted work group, please contact me.
* '''/cephfs/abyss/datasets''': directory for static datasets, mounted read-only. These are large general-interest datasets for which we only want to store one copy on the filesystem (no separate imagenets for everyone, please). So whenever you have a well-known public dataset in your shared directory which you think would be useful to have in the static tree, please contact me and I will move it to the read-only region.
In addition, there is a directory local to each host which, depending on your workload, might be much faster than cephfs, but which also ties you to a specific machine:
* '''/raid/local-data/<your-username>''': your personal directory on the local SSD raid of the machine. Make sure to set "type: DirectoryOrCreate", as it is not guaranteed to exist yet.
Please refer to [[CCU:Perstistent storage on the Kubernetes cluster|the persistent storage documentation]] for more details.

== Copy data from the old cluster into the new filesystem ==

The shared file system can be mounted as an nfs volume on the node "Vecna" on the old cluster, so you can create a pod on Vecna which mounts both the new filesystem and your PVs from the old cluster. Please use the following pod configuration as a template and add additional mounts for the PVs you want to copy over:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: <your-username>-transfer-pod
  namespace: exc-cb
spec:
  nodeSelector:
    kubernetes.io/hostname: vecna
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "1d"]
    volumeMounts:
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
  volumes:
  - name: cephfs-shared
    nfs:
      path: /cephfs/abyss/shared
      server: ccu-node1
</syntaxhighlight>

Afterwards, run a shell in the container and copy your data over to /abyss/shared/users/<your-username>. Make sure to set a group ownership id of 10000 with rw permissions for the group (rwx for directories) so that you have read/write access on the new cluster. The following should do the trick:

<syntaxhighlight lang="bash">
> kubectl exec -it <your-username>-transfer-pod -- /bin/bash
# cd /abyss/shared/users/<your-username>
# cp -r <all-my-stuff> ./
# chgrp -R 10000 *
# chown -R 10000 *    (replace 10000 with your real user id if you already know it from logging into the new cluster)
# chmod -R g+w *
</syntaxhighlight>

== Getting started on the new cluster ==
=== Moving your workloads to the new cluster ===
You can now verify that you can start a GPU-enabled pod. Try to create a pod with the following specs to allocate 1 GPU for you somewhere on the cluster. The image we use is provided by nVidia and has Tensorflow/Keras pre-installed. There are many other useful base images around which you can use instead.
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
    volumeMounts:
    - mountPath: /abyss/home
      name: cephfs-home
      readOnly: false
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
    - mountPath: /abyss/datasets
      name: cephfs-datasets
      readOnly: true
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
  - name: cephfs-datasets
    hostPath:
      path: "/cephfs/abyss/datasets"
      type: Directory
</syntaxhighlight>
See [https://www.nvidia.com/en-us/gpu-cloud/containers/ the catalog of containers by nVidia] for more options for base images (e.g. [https://ngc.nvidia.com/catalog/containers/nvidia:pytorch PyTorch]), or Google around for containers of your favourite application. '''Make sure you only run containers from trusted sources!'''

'''Please note (very important): The 20.09 versions of the deep learning framework images on nvcr.io work on all hosts in the cluster. While there are newer images available, they require drivers >= 455, which are not available on all machines yet. For guaranteed compatibility, you must stick to 20.09, or target a specific host with newer drivers.'''

At the bottom of the GPU cluster status page, there is the nvidia-smi output for each node, where you can check the individual driver and CUDA versions. You can again switch to a shell in the container and verify its GPU capabilities:
<syntaxhighlight>
> kubectl apply -f gpu-pod.yaml
  ... wait until the pod is created, check with "kubectl describe pod gpu-pod" or "kubectl get pods" ...
> kubectl exec -it gpu-pod -- /bin/bash
# nvidia-smi
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
</syntaxhighlight>
To check compatibility with specific nVidia containers, please refer to the [https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html official compatibility matrix]. Note that all nodes have datacenter drivers installed, which should give broad compatibility. If in doubt, just try it out.
Combine with the volume mounts above, and you already have a working environment. For example, you could transfer some code and data of yours to your home directory, and run it in interactive mode in the container as a quick test. Remember to adjust paths to data sets or to mount the directories in the locations expected by your code.
<syntaxhighlight>
> kubectl exec -it gpu-pod -- /bin/bash
# cd /abyss/home/<your-code-repo>
# python ./main.py
</syntaxhighlight>
Note that there are timeouts in place: this is a demo pod which runs only for 24 hours, and an interactive session also has a time limit, so it is better to build a custom run script which is executed when the container in the pod starts. For longer training runs, consider wrapping the pod spec in a job. A job is a wrapper for a pod spec which can, for example, make sure that the pod is restarted until it has at least one successful completion. This is useful for long deep learning workloads, where a pod failure might happen in between (for example due to a node reboot). See the [https://kubernetes.io/docs/concepts/workloads/pods/ Kubernetes docs for pods] or [https://kubernetes.io/docs/concepts/workloads/controllers/job/ jobs] for more details.
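A minimal job sketch along these lines could look as follows (the job name, script path and resource values are only placeholders; reuse the volume definitions from the GPU pod example above):

<syntaxhighlight lang="yaml">
apiVersion: batch/v1
kind: Job
metadata:
  name: my-training-job
spec:
  backoffLimit: 4                  # retry a failed pod up to four times
  template:
    spec:
      restartPolicy: OnFailure     # restart until the script exits successfully
      containers:
      - name: trainer
        image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
        command: ["python", "/abyss/home/<your-code-repo>/main.py"]
        resources:
          limits:
            nvidia.com/gpu: 1
        # add the volumeMounts from the GPU pod example here
      # add the volumes from the GPU pod example here
</syntaxhighlight>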
If you do not have your code ready, you can quickly check whether GPU execution works by running the demo code from [https://github.com/dragen1860/TensorFlow-2.x-Tutorials this tutorial] as follows:
<syntaxhighlight>
> kubectl exec -it gpu-pod -- /bin/bash
# cd /abyss/home
# git clone https://github.com/dragen1860/TensorFlow-2.x-Tutorials.git
# cd TensorFlow-2.x-Tutorials/12_VAE
# ls
README.md images main.py variational_autoencoder.png
# pip3 install pillow matplotlib
# python ./main.py
</syntaxhighlight>
Remember to clean up resources which you are not using anymore; this includes pods and jobs. For example, when your pod has finished whatever it is supposed to be doing, run
<syntaxhighlight>
> kubectl delete -f gpu-pod.yaml
</syntaxhighlight>
using the same manifest file you used to create the resource with kubectl apply.
== Targeting specific nodes and GPU capabilities ==
By default, your pods will be scheduled on the lowest class of GPUs (in terms of available memory; they are mostly still quite decent). Please refer to
[[Cluster:Compute nodes|the documentation on compute nodes]] for information on how to target different nodes with higher capability.
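The usual Kubernetes mechanism for this is a nodeSelector in the pod spec, as already used in the transfer pod above; for example (the hostname is just a placeholder, check the compute node documentation for real node names and labels):

<syntaxhighlight lang="yaml">
spec:
  nodeSelector:
    kubernetes.io/hostname: <node-name>
</syntaxhighlight>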
== Accessing ports on the pod from your own system ==
Some monitoring tools for deep learning use ports on the pod to convey information via a browser interface, an example being Tensorboard. You can forward these ports to your own local host using kubectl as a proxy. Follow the [https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/ tutorial here] to learn how it works. Syntax for port-forwarding:
<syntaxhighlight>
> kubectl port-forward <pod-name> <local-port>:<pod-port>
</syntaxhighlight>
kubectl will now continue running as a proxy. While it is running, you can access the pod's service on "localhost:<local-port>" in the browser on your own machine. You could even create containers which provide interactive environments via a web interface, e.g. a Jupyter notebook server.
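For example, assuming a TensorBoard instance is running inside the pod on its default port 6006 (an assumption; adjust the port to whatever your tool uses), you could forward it to the same port on your machine and open localhost:6006 in your browser:

<syntaxhighlight lang="bash">
> kubectl port-forward gpu-pod 6006:6006
</syntaxhighlight>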
=== Cleaning up ===

Once everything works for you on the new cluster, please clean up your presence on the old one. In particular:
* Delete all running pods.
* Delete all persistent volume claims. This is the most important step, as it shows me which of the local filesystems of the nodes are not in use anymore, so I can transfer the node over to the new cluster.

== Create, push and pull docker images to and from the CCU repository ==

Please follow our tutorial on how to create, push and pull docker images to and from our CCU repository:
* [[Tutorials:Link_to_container_registry_on_our_server | How to use the CCU image repository]]

== Mount your custom, or Data Management Plan (DMP) provided, cifs storage ==

* [[Tutorials:Mount_cifs_storage_in_a_pod | How to mount cifs storage]]