'''Note 2:''' Storing separate credentials on two different computers is not supported. One of them will consume the refresh token, which then becomes invalid on the other. If you need to access the cluster from a second computer, it is advised to use an SSH connection to the primary one where you store the credentials.
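For example, assuming your credentials live on a machine reachable as <primary-computer> (hostname and username below are placeholders), you can work from a second computer by opening an SSH session and running kubectl on the primary machine:
<syntaxhighlight lang="bash">
> ssh <username>@<primary-computer>
> kubectl get pods
</syntaxhighlight>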
== Create a pod to access the file systems ==
After login and adjusting the kubeconfig to the new cluster and user namespace, you should be able to start your first pod. Create a work directory on your machine, and a file "access-pod.yaml" with the following content:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: access-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 1
        memory: 1Gi
    volumeMounts:
    - mountPath: /abyss/home
      name: cephfs-home
      readOnly: false
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
    - mountPath: /abyss/datasets
      name: cephfs-datasets
      readOnly: true
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
  - name: cephfs-datasets
    hostPath:
      path: "/cephfs/abyss/datasets"
      type: Directory
</syntaxhighlight>
When you apply this on the cluster, it creates a pod running a container based on the Ubuntu 20.04 image, with the Ceph filesystems mounted into it. Use the following commands to create the pod, inspect it, and open a shell inside it:
<syntaxhighlight lang="bash">
> kubectl apply -f access-pod.yaml
> kubectl get pods
> kubectl describe pod access-pod
> kubectl exec -it access-pod -- /bin/bash
$ ls /abyss/home/
</syntaxhighlight>
The last command gives you a shell in the container. Verify that the pod has been created correctly and that the filesystems have been mounted successfully, check whether you can access the data you have copied over, and obtain the numeric user and group IDs used for filesystem permissions.
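When you no longer need the access pod, you can remove it again with the standard kubectl delete command, referencing the manifest file from above:
<syntaxhighlight lang="bash">
> kubectl delete -f access-pod.yaml
</syntaxhighlight>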
== Getting started on the new cluster ==
=== Login to the new cluster and update your kubeconfig ===
The frontend for the cluster and login services is located here:
https://ccu-k8s.inf.uni-konstanz.de/
Please choose "login to the cluster" and enter your credentials to obtain the kubeconfig data. Choose "full kubeconfig" on the left for all the details you need. Either back up your old kubeconfig and use this as the new one, or merge both into a single kubeconfig, which allows you to easily switch context between both clusters. In the beginning this can be useful, as you may still need to copy over data you forgot, and clean up the old cluster once everything works.
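If you prefer to merge the two kubeconfigs automatically rather than by hand, kubectl can flatten multiple config files into one; the file names below are placeholders for wherever you saved the old and new configs:
<syntaxhighlight lang="bash">
> export KUBECONFIG=~/.kube/config-old:~/.kube/config-new
> kubectl config view --flatten > ~/.kube/config
> unset KUBECONFIG
</syntaxhighlight>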
A kubeconfig for both clusters has the following structure (note this needs to be saved in "~/.kube/config"):
<syntaxhighlight lang="yaml">
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV ... <many more characters>
    server: https://134.34.224.84:6443
  name: ccu-old
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRV ... <many more characters>
    server: https://ccu-k8s.inf.uni-konstanz.de:7443
  name: ccu-new
contexts:
- context:
    cluster: ccu-old
    namespace: exc-cb
    user: credentials-old
  name: ccu-old
- context:
    cluster: ccu-new
    namespace: <your-namespace>
    user: credentials-new
  name: ccu-new
current-context: ccu-new
kind: Config
preferences: {}
users:
- name: credentials-old
  <all the data below your username returned from the old loginapp goes here>
- name: credentials-new
  <all the data below your username returned from the new loginapp goes here>
</syntaxhighlight>
Both the long CA data string and the user credentials are returned by the respective loginapps of the clusters. Note: the CA data differs between the two clusters, even though the first couple of characters are the same. With such a kubeconfig holding multiple contexts, you can easily switch between the clusters:
<syntaxhighlight lang="bash">
> kubectl config use-context ccu-old
> <... work with old cluster>
> kubectl config use-context ccu-new
> <... work with new cluster>
</syntaxhighlight>
Defining different contexts is also a good way to switch between namespaces or users (which should not be necessary for the average user).
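To check which contexts exist and which one is currently active, or to change the namespace of the current context, the following standard kubectl commands can be used (the namespace is a placeholder):
<syntaxhighlight lang="bash">
> kubectl config get-contexts
> kubectl config current-context
> kubectl config set-context --current --namespace=<your-namespace>
</syntaxhighlight>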
=== Running the first test container on the new cluster ===
After login and adjusting the kubeconfig to the new cluster and user namespace, you should be able to start your first pod. The following example pod runs an Ubuntu container with the Ceph filesystems mounted into it.
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-test-pod
spec:
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 1
        memory: 1Gi
    volumeMounts:
    - mountPath: /abyss/home
      name: cephfs-home
      readOnly: false
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
    - mountPath: /abyss/datasets
      name: cephfs-datasets
      readOnly: true
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
  - name: cephfs-datasets
    hostPath:
      path: "/cephfs/abyss/datasets"
      type: Directory
</syntaxhighlight>
Save this into a "test-pod.yaml", start the pod, and verify that it has been created correctly and that the filesystems have been mounted successfully, for example with the commands below. You can also check whether you can access the data you have copied over, and obtain the numeric user and group IDs used for filesystem permissions.
<syntaxhighlight lang="bash">
> kubectl apply -f test-pod.yaml
> kubectl get pods
> kubectl describe pod ubuntu-test-pod
> kubectl exec -it ubuntu-test-pod -- /bin/bash
$ ls /abyss/shared/<the directory you created for your data>
$ id
uid=10000 gid=10000 groups=10000
</syntaxhighlight>
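If you still need to move data into or out of the mounted filesystems, "kubectl cp" works through the running test pod; the file names below are examples only:
<syntaxhighlight lang="bash">
> kubectl cp ./mydata.tar.gz ubuntu-test-pod:/abyss/home/mydata.tar.gz
> kubectl cp ubuntu-test-pod:/abyss/home/results.txt ./results.txt
</syntaxhighlight>
Note that "kubectl cp" requires a tar binary inside the container, which the Ubuntu image provides.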
=== Moving your workloads to the new cluster ===