
= Initializing the Kubernetes cluster =

== Kubernetes and pre-requisites (every node) ==
* Install Kubernetes on Ubuntu 18.04, assuming version 1.14.3 is pulled (check how to pin the version):
<syntaxhighlight lang="bash">
> sudo snap install kubeadm --classic
> sudo snap install kubelet --classic
> sudo snap install kubectl --classic
> sudo apt install rand faketime
</syntaxhighlight>
On new systems, copy over the install script from the master node instead:
<syntaxhighlight lang="bash">
> cd init
> ./install_kubernetes.sh
</syntaxhighlight>
* Reconfigure the docker runtime. Edit /etc/docker/daemon.json as follows:
<syntaxhighlight lang="json">
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
</syntaxhighlight>
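A broken daemon.json keeps docker from starting, so it is worth checking the syntax before restarting the daemon. A minimal sketch, validating the sample content above via a temp file (point the path at /etc/docker/daemon.json on a real node):

```shell
# Sketch: validate a daemon.json candidate before restarting docker.
# 'f' is a temp file with the sample content; on a real node use /etc/docker/daemon.json.
f=$(mktemp)
cat > "$f" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool "$f" > /dev/null && echo "daemon.json: valid JSON"
```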
* On nodes with an nVidia GPU, add the following:
<syntaxhighlight lang="json">
  "default-runtime": "nvidia",
  "default-shm-size": "1g",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
</syntaxhighlight>
* Restart the docker daemon:
<syntaxhighlight lang="bash">
> mkdir -p /etc/systemd/system/docker.service.d
> systemctl daemon-reload
> systemctl restart docker
</syntaxhighlight>
* Make sure swap is off:
<syntaxhighlight lang="bash">
> sudo swapoff -a
</syntaxhighlight>
Check /etc/fstab; if swap is still configured there, delete the entry, otherwise it comes back on reboot.
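Leftover swap entries in /etc/fstab are easy to spot mechanically. A sketch against sample fstab content (read the real /etc/fstab on a node; the UUID and swapfile lines are made up):

```shell
# Sketch: count leftover swap entries (sample content; use /etc/fstab on a real node)
fstab='UUID=0a1b / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0'
# field 3 of an fstab line is the filesystem type
swap_lines=$(printf '%s\n' "$fstab" | awk '$3 == "swap"' | wc -l)
echo "swap entries to delete: $swap_lines"
```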
== Spin up the master node ==
Use kubeadm with vanilla defaults to initialize the control plane.
 
<syntaxhighlight lang="bash">
> sudo systemctl enable docker.service
> sudo kubeadm init
</syntaxhighlight>
 
If this fails at any point, fix the problem and run kubeadm reset before trying to re-initialize.
* Post-init steps to set up the admin user on this account
<syntaxhighlight lang="bash">
> cd init
> ./finalize_master.sh
</syntaxhighlight>
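finalize_master.sh is site-specific, but it presumably performs the standard kubeadm post-init step of installing the admin kubeconfig into the user's home directory. A sketch of that step, demonstrated against temporary directories instead of the real /etc/kubernetes and $HOME:

```shell
# Sketch of the usual kubeadm post-init step (what finalize_master.sh presumably does),
# run here against temp dirs so nothing on the system is touched
fake_etc=$(mktemp -d)     # stands in for /etc/kubernetes
fake_home=$(mktemp -d)    # stands in for $HOME
echo 'kind: Config' > "$fake_etc/admin.conf"
mkdir -p "$fake_home/.kube"
cp "$fake_etc/admin.conf" "$fake_home/.kube/config"
# on a real node this is followed by: sudo chown $(id -u):$(id -g) $HOME/.kube/config
cat "$fake_home/.kube/config"
```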
== Update control plane configuration for master node ==
 
Edit /etc/kubernetes/manifests/kube-controller-manager.yaml:
 
<syntaxhighlight lang="yaml">
spec:
  containers:
  - command:
    # add these two
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
</syntaxhighlight>
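With --cluster-cidr=10.244.0.0/16 and the default node CIDR mask of /24 (one pod subnet per node, which is also flannel's default subnet length), the cluster can hand out 2^(24-16) node subnets. A quick sanity check:

```shell
# Each node receives a /24 pod subnet out of the /16 cluster CIDR (default mask size 24),
# so the number of addressable node subnets is 2^(24-16)
cluster_prefix=16
node_prefix=24
echo "max node subnets: $(( 2 ** (node_prefix - cluster_prefix) ))"
```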
 
Copy certs/ca.crt (certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem.
 
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
 
<syntaxhighlight lang="yaml">
spec:
  containers:
  - command:
    # add these five
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups
</syntaxhighlight>
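With these flags the apiserver maps ID-token claims to a Kubernetes identity: --oidc-username-claim=name selects the username, --oidc-groups-claim=groups the group list used by RBAC. A sketch with hypothetical claim values, showing which fields are read:

```shell
# Hypothetical ID-token claims; the apiserver reads .name and .groups per the flags above
claims='{"name":"jane.doe","groups":["cv-group","students"]}'
printf '%s' "$claims" | python3 -c '
import json, sys
c = json.load(sys.stdin)
print("user:", c["name"])                 # selected by --oidc-username-claim=name
print("groups:", ",".join(c["groups"]))   # selected by --oidc-groups-claim=groups
'
```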
 
== Daemonsets on Master node ==
 
=== Flannel daemonset (pod network for communication) ===
 
<syntaxhighlight lang="bash">
> cd init
> ./start_pod_network.sh
</syntaxhighlight>
 
 
=== nVidia daemonset ===
 
<syntaxhighlight lang="bash">
> cd init
> ./deploy_nvidia_device_plugin.sh
</syntaxhighlight>
 
The daemonset should be active on any node with an nVidia GPU.
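To check that the plugin works on a GPU node, a pod can request the nvidia.com/gpu resource that the device plugin advertises. A minimal sketch (pod name and image tag are placeholders):

```yaml
# Hypothetical smoke-test pod: schedules only where the nVidia device plugin
# advertises the nvidia.com/gpu resource
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvidia/cuda:10.0-base     # placeholder image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```

If the pod completes and its log shows the nvidia-smi table, the daemonset is working on that node.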
== Authentication systems ==
The master node should now be able to log in to the docker registry of the cluster.
<syntaxhighlight lang="bash">
> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:
</syntaxhighlight>
Also, we need to provide the read-only secret for the docker registry in every namespace. TODO: howto.

Finally, we need to set up all the rules for RBAC.
<syntaxhighlight lang="bash">
> cd rbac
# generate namespaces for user groups
> ./generate_namespaces.sh
# label all compute nodes for which namespace they serve
# (after they are up, needs to be redone when new nodes are added)
> ./label_nodes.sh
# set up access rights for namespaces
> kubectl apply -f rbac.yaml
# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
</syntaxhighlight>
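For the per-namespace read-only registry secret, one option is a kubernetes.io/dockerconfigjson secret applied to each namespace. A sketch with placeholder names and credentials (the secret name, namespace, and the read-only account are assumptions, not settled conventions of this cluster):

```yaml
# Hypothetical read-only pull secret; create one per namespace.
# The base64 payload can be generated with:
#   kubectl create secret docker-registry registry-read-only \
#     --docker-server=https://ccu.uni-konstanz.de:5000 \
#     --docker-username=<readonly-user> --docker-password=<pw> --dry-run -o yaml
apiVersion: v1
kind: Secret
metadata:
  name: registry-read-only
  namespace: <group-namespace>
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config>
```

Pods then reference it via imagePullSecrets in their spec.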
== Persistent volumes ==

=== Local persistent volumes ===

Check directory local_storage:
* clone the git repository for the provisioner using clone_provisioner.sh (delete it first if already there).
* install helm: install_helm.sh, get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
* set up and run the provisioner:
<syntaxhighlight lang="bash">
> cd install
> generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
</syntaxhighlight>
After local persistent volumes have been generated in /mnt on the nodes, they should show up under
<syntaxhighlight lang="bash">
> kubectl get pv
</syntaxhighlight>

== DEX with LDAP ==

TODO: outdated, switched to containerized DEX. Check what still needs to be done.

Set up according to [https://github.com/krishnapmv/k8s-ldap this tutorial] with customized install scripts in kubernetes/init/dex/:
# Create secrets for TLS connections, use certs for ccu.uni-konstanz.de
## Modify ca-cm.yml to contain the correct ca.
## Run upload_ccu_tls.sh
# Spin up the login application service.
## Modify loginapp-cm.yml: server config
## Modify loginapp-ing-srv.yml: service data, mapping of ports to the outside world
## Modify loginapp-deploy.yml: ID secret for TLS
## Run start-login-service.sh
# Spin up dex
## Modify dex-cm.yml: server data and LDAP configuration
## Modify dex-ing-srv.yml: service data, mapping of ports to the outside world
## Modify dex-deploy.yml: ID secret for TLS
## Run start-dex-service.sh
