== Kubernetes and pre-requisites (every node) ==
Install Kubernetes on Ubuntu 18.04. Assuming version 1.14.2 is pulled; check how to pin the version. On new systems, copy over the install script from the master node.
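To keep all nodes on that exact version, the packages can be pinned; a sketch (the Ubuntu package revision 1.14.2-00 is an assumption, adjust to what apt actually offers):
<syntaxhighlight lang="bash">
# install the pinned versions of the Kubernetes packages
> sudo apt-get install -y kubelet=1.14.2-00 kubeadm=1.14.2-00 kubectl=1.14.2-00
# keep unattended upgrades from pulling in a newer version
> sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>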
The Docker daemon configuration in /etc/docker/daemon.json should select the overlay2 storage driver:
<syntaxhighlight lang="json">
{
    "storage-driver": "overlay2"
}
</syntaxhighlight>
On nodes with an nVidia GPU, add the following entries to /etc/docker/daemon.json:
<syntaxhighlight lang="json">
"default-runtime": "nvidia",
"default-shm-size": "1g",
"runtimes": {
    "nvidia": {
        "path": "nvidia-container-runtime",
        "runtimeArgs": []
    }
}
</syntaxhighlight>
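After changing the daemon configuration, Docker has to be restarted; whether the nvidia runtime was picked up can then be checked, for example:
<syntaxhighlight lang="bash">
> sudo systemctl restart docker
# "Runtimes" should list nvidia, "Default Runtime" should be nvidia
> docker info | grep -i runtime
</syntaxhighlight>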
Check /etc/fstab to see whether swap is still configured there; if so, remove or comment out the swap entry, since the kubelet will not run with swap enabled.
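A sketch of disabling swap both immediately and persistently (the sed pattern assumes the usual whitespace-separated fstab columns):
<syntaxhighlight lang="bash">
# turn off swap for the running system
> sudo swapoff -a
# comment out swap entries so they do not return after a reboot
> sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
</syntaxhighlight>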
== Create cluster configuration scripts ==
OBSOLETE, DOES NOT SEEM TO WORK IN NEW KUBERNETES.
<syntaxhighlight lang="bash">
> cd init/templates
# edit cluster information in the following config file
> nano make_init_config.sh
> touch /home/kubernetes/.rnd
> ./make_init_config.sh
</syntaxhighlight>
This will generate the init config from the config template and store it in /home/kubernetes/clusters/ccu.
== Spin up the master node ==
Use kubeadm with vanilla defaults to initialize the control plane.
<syntaxhighlight lang="bash">
> sudo systemctl enable docker.service
> sudo kubeadm init
</syntaxhighlight>
If this fails at any point, use kubeadm reset after problems have been fixed before trying to re-initialize.
* Post-init steps: set up the admin user on this account
<syntaxhighlight lang="bash">
> cd init
> ./finalize_master.sh
</syntaxhighlight>
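If a previous attempt left state behind, the reset/retry cycle mentioned above looks like this (kubeadm reset asks for confirmation unless -f is given):
<syntaxhighlight lang="bash">
> sudo kubeadm reset
> sudo kubeadm init
</syntaxhighlight>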
== Update kubelet configuration for master node ==
Copy certs/ca.crt (the certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem. Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
<syntaxhighlight lang="yaml">
spec:
  containers:
  - command:
    # add these five options
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups
</syntaxhighlight>
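Since the apiserver runs as a static pod, the kubelet restarts it automatically once the manifest is saved. To verify that it came back with the new flags, something like the following can be used:
<syntaxhighlight lang="bash">
# the pod should return to Running after a short while
> kubectl -n kube-system get pods -l component=kube-apiserver
# the five oidc options should appear in the pod command line
> kubectl -n kube-system get pod -l component=kube-apiserver -o yaml | grep oidc
</syntaxhighlight>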
== Daemonsets on Master node ==
=== Flannel daemonset (pod network for communication) ===
<syntaxhighlight lang="bash">
> cd init
> ./start_pod_network.sh
</syntaxhighlight>
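For reference, start_pod_network.sh presumably just applies the flannel manifest; a sketch (URL and manifest version are assumptions):
<syntaxhighlight lang="bash">
# flannel's stock manifest assumes the pod CIDR 10.244.0.0/16
> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</syntaxhighlight>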
=== nVidia daemonset ===
<syntaxhighlight lang="bash">
> cd init
> ./deploy_nvidia_device_plugin.sh
</syntaxhighlight>
The daemonset should be active on any node with an nVidia GPU.
== Authentication systems ==
The master node should now be able to log in to the docker registry of the cluster.
<syntaxhighlight lang="bash">
> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:
</syntaxhighlight>
Also, we need to provide the read-only secret for the docker registry in every namespace.
=== DEX with LDAP ===
TODO: outdated, we have switched to containerized DEX. Check what in this howto still needs to be done.
Set up according to [https://github.com/krishnapmv/k8s-ldap this tutorial]
with customized install scripts in kubernetes/init/dex/
# Create secrets for TLS connections, use certs for ccu.uni-konstanz.de
## Modify ca-cm.yml to contain correct ca
## Run upload_ccu_tls
# Spin up login application service
## Modify loginapp-cm.yml: server config
## Modify loginapp-ing-srv.yml: service data, mapping of ports to outside world
## Modify loginapp-deploy.yml: ID secret for TLS
## Run start-login-service
# Spin up dex
## Modify dex-cm.yml: server data and LDAP configuration
## Modify dex-ing-srv.yml: service data, mapping of ports to outside world
## Modify dex-deploy.yml: ID secret for TLS
## Run start-dex-service.sh
Finally, we need to set up all the rules for rbac.
<syntaxhighlight lang="bash">
> cd rbac
# generate namespaces for user groups
> ./generate_namespaces.sh
# label all compute nodes for which namespace they serve (after they are up, needs to be redone when new nodes are added)
> ./label_nodes.sh
# set up access rights for namespaces
> kubectl apply -f rbac.yaml
# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
</syntaxhighlight>
== Persistent volumes ==
=== Local persistent volumes ===
Check directory local_storage:
* clone the git repository for the provisioner using clone_provisioner.sh (delete first if already here)
* install helm: install_helm.sh, get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
* set up and run provisioner:
<syntaxhighlight lang="bash">
> cd install
> ./generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
</syntaxhighlight>
After local persistent volumes on the nodes have been generated in /mnt/kubernetes, they should show up under
<syntaxhighlight lang="bash">
> kubectl get pv
</syntaxhighlight>