CCU:New GPU Cluster
Contents
- 1 Overview
- 2 User namespace, pod security and quotas
- 3 Persistent volume management
- 4 Copy data from the old cluster into the new filesystem
- 5 Logging in to the new cluster and updating your kubeconfig
- 6 Running the first test container on the new cluster
- 7 Moving your workloads to the new cluster
- 8 Compute nodes
- 9 What you need
- 10 How to get started
- 11 Tips and Tricks
- 12 Reference documents
Overview
In January, the old GPU cluster will gradually be dismantled and integrated into a new Kubernetes cluster. The reason is a set of massive hardware upgrades to the backbone infrastructure:
- New Ceph-based storage cluster with currently 180 TB of NVMe storage to supply all compute nodes with data.
- New network backbone: HDR InfiniBand (200 Gb/s).
- Triple-redundant servers to supply basic services and serve API requests, minimizing downtime.
Since we are reinstalling everything from scratch, the way the cluster is used will also change slightly, both for easier access to storage (getting rid of the somewhat cumbersome need to allocate persistent volumes) and for improved security (separate user namespaces).
We first provide a comprehensive list of changes in how to use the cluster, then give a detailed manual for how to move over your data and pods.
User namespace, pod security and quotas
Each user now works in their own namespace, which is auto-generated when your login is created. The naming convention is "firstname-lastname"; you therefore need to update the default namespace in your kubeconfig. For security reasons, containers must run with your user id and your user group. To make configuration easy, a pod preset which sets all required options (in addition to mounting the basic filesystems) is available in your namespace; see the examples below for details.
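Updating the default namespace is a one-line change in your kubeconfig. A sketch of the relevant excerpt (all names here are placeholders — use the cluster and user values from the login frontend, with "jane-doe" standing in for your firstname-lastname namespace):

```yaml
# Excerpt from ~/.kube/config (placeholder names)
contexts:
- context:
    cluster: ccu-new
    user: jane-doe
    namespace: jane-doe   # kubectl commands without -n now target this namespace
  name: ccu-new
current-context: ccu-new
```

Alternatively, `kubectl config set-context --current --namespace=jane-doe` makes the same change without editing the file by hand.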
Beyond that, the security policy for pods is now fairly restrictive; in particular, you can no longer run containers as root. If this causes problems, please contact me so we can work out a solution.
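For reference, what the preset enforces amounts to a pod-level security context roughly like this (the uid is a made-up example; the preset fills in your real ids for you):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: non-root-example
spec:
  securityContext:
    runAsUser: 12345    # example value; in practice, your cluster user id
    runAsGroup: 10000   # the shared user group
  containers:
  - name: main
    image: ubuntu:18.04
    command: ["id"]     # prints the uid/gid the container actually runs with
```

A pod that tries to run as uid 0 should simply be rejected by the policy.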
Finally, there is now a mechanism in place to set resource quotas for individual users. The default quota is quite generous at the moment since we have plenty of resources, but if you believe your account is too limited, please contact me.
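Per-user quotas are ordinary Kubernetes ResourceQuota objects in your namespace; a sketch of what one looks like (all numbers invented):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: user-quota
spec:
  hard:
    requests.cpu: "32"              # total CPU cores across all your pods
    requests.memory: 128Gi          # total memory across all your pods
    requests.nvidia.com/gpu: "4"    # total GPUs across all your pods
```

`kubectl describe resourcequota` in your namespace shows the actual limits together with your current usage.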
Persistent volume management
The Ceph storage cluster provides a file system which is mounted on every node in the cluster. Pods are allowed to mount a subset of this filesystem as a host path, which happens automatically if you use the preconfigured pod preset in your namespace (see below). The following directories will be mounted in each of your containers:
- /abyss/home: this is your personal home directory which you can use any way you like.
- /abyss/shared: a shared directory where every user has read/write access. It is a standard Unix filesystem; every user has an individual user id but is (for now) in the same user group, so you can set the usual file access permissions on directories you create. To avoid total anarchy in this filesystem, please use sensible names and organize your data in subdirectories. For example, put personal files which you want to make accessible to everyone in "/abyss/shared/users/<your-namespace>". I will monitor how this works out and whether we need more rules here.
- /abyss/datasets: directory for static datasets, mounted read-only. These are large general-interest datasets for which we only want to store one copy on the filesystem (no separate ImageNets for everyone, please). Whenever you have a well-known public dataset in your shared directory which you think would be useful in the static tree, please contact me and I will move it to the read-only region.
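In terms of plain Kubernetes objects, the preset's mounts roughly correspond to volume mounts like the following (a sketch only — the preset's actual definition may differ):

```yaml
# Sketch of the mounts the preset adds to each of your containers
volumeMounts:
- name: home
  mountPath: /abyss/home
- name: shared
  mountPath: /abyss/shared
- name: datasets
  mountPath: /abyss/datasets
  readOnly: true          # static datasets are read-only
```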
Copy data from the old cluster into the new filesystem
The shared file system can be mounted as a host path on the node "Vecna" on the old cluster, so you can create a pod on Vecna which mounts both the new filesystem and your PVs from the old cluster. Please use the following pod configuration as a template and add additional mounts for the PVs you want to copy over:
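A minimal sketch of such a pod (the node name spelling, image, and claim name are assumptions — replace "my-old-pvc" with your actual PVCs, adding one volume/mount pair per PV):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: copy-to-abyss
spec:
  nodeName: vecna                   # pin the pod to Vecna, where /abyss is mounted
  containers:
  - name: copy
    image: ubuntu:18.04
    command: ["sleep", "infinity"]  # keep the pod alive for an interactive shell
    volumeMounts:
    - name: abyss
      mountPath: /abyss
    - name: old-data
      mountPath: /old-data
  volumes:
  - name: abyss
    hostPath:
      path: /abyss                  # the new shared filesystem
  - name: old-data
    persistentVolumeClaim:
      claimName: my-old-pvc         # placeholder for one of your old PVCs
```

After `kubectl apply -f copy-pod.yaml`, open a shell with `kubectl exec -it copy-to-abyss -- bash`.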
Afterwards, run a shell in the container and copy your stuff over to /abyss/shared/users/<your-namespace>. Make sure to set a group ownership id of 10000 with rw permissions for the group on everything (rwx for directories) so you have read/write access on the new cluster.
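The ownership and permission part can be done with chgrp and chmod. A sketch (for demonstration it operates on a temporary directory — inside the copy pod, point DEST at your target under /abyss/shared/users and also run the commented chgrp line):

```shell
# Demonstration on a temporary directory; in the copy pod you would use
# DEST=/abyss/shared/users/<your-namespace> instead.
DEST=$(mktemp -d)
mkdir -p "$DEST/project"
touch "$DEST/project/results.txt"

# chgrp -R 10000 "$DEST"   # on the cluster: hand ownership to the shared group
chmod -R g+rwX "$DEST"     # group rw on everything, plus x on directories

ls -l "$DEST"
```

The capital X in `g+rwX` is what grants rwx on directories but only rw on regular files.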
Logging in to the new cluster and updating your kubeconfig
The frontend for the cluster and login services is located here:
https://ccu-k8s.inf.uni-konstanz.de/
Please follow instructions there to obtain credentials and server information for your kubeconfig.
Running the first test container on the new cluster
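A good first test is a pod that does nothing but run nvidia-smi on a GPU; a minimal sketch (the image tag is an assumption — any CUDA base image should do):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: nvidia/cuda:10.0-base
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1    # request a single GPU
```

After `kubectl apply`, `kubectl logs gpu-smoke-test` should show the GPU assigned to the pod.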
Moving your workloads to the new cluster
Compute nodes
See this page for a current list of compute nodes, their hardware, and which groups they serve.
What you need
- An account for the CCU.
- Ideally, a desktop PC with an nVidia GPU to test your code before pushing it to the cluster. However, you can develop for and control the cluster from any machine; it is not mandatory to run the code locally. Note, however, that debugging is harder if you cannot (you have to do everything on the console).
- Your PC should ideally run a flavor of Linux; all example scripts were tested against Ubuntu 18.04 (and should also work on derivatives such as Mint 19). If you use Windows, you are on your own.
- Admin access to your own PC to install lots of stuff (or a friendly administrator).
- More specific needs will be covered in the in-depth tutorials.
How to get started
- Preparing your system
- Step 1: Install nVidia CUDA and GPU drivers
- Step 2: Install the nVidia docker system
- Step 3: Link to container registry on our server
- For the impatient: Complete install script for a fresh Ubuntu 18.04
- Learning the basics of Docker
- An in-depth look at a container which trains MNIST using Tensorflow, with the following steps:
- Step 1: create a local python tensorflow application.
- Step 2: wrap the application in a container.
- Step 3: run and test the container locally.
- Step 4: push the container to the registry server of the cluster.
- Step 5: remarks on persistent storage in docker containers
- Learning the basics of Kubernetes and how to run jobs on the cluster:
- Step 1: Install the Kubernetes infrastructure
- Step 2: Set up your Kubernetes user account
- Step 3: Run the example container on the cluster and make sure that it works correctly.
- Step 4: Persistent volumes on the GPU cluster
- Step 5: Monitoring with Tensorboard on the GPU cluster
Tips and Tricks