Initializing the Kubernetes cluster

== Kubernetes and pre-requisites (every node) ==
* Install Kubernetes on Ubuntu 18.04. Assuming version 1.14.3 is pulled; check how to fix the version. On new systems, copy over the install script from the master node.
<syntaxhighlight lang="bash">
> cd init
> ./install_kubernetes.sh
</syntaxhighlight>
* Set up other pre-requisites:
** Reconfigure the docker runtime. Edit /etc/docker/daemon.json as follows:
<syntaxhighlight lang="bash">
{
}
</syntaxhighlight>
** On nodes with an nVidia GPU, add the following:
<syntaxhighlight lang="bash">
  "default-runtime": "nvidia",
  "default-shm-size": "1g",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
</syntaxhighlight>
** Restart the docker daemon:
<syntaxhighlight lang="bash">
> mkdir -p /etc/systemd/system/docker.service.d
> sudo systemctl daemon-reload
> sudo systemctl restart docker
</syntaxhighlight>
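For reference, a complete /etc/docker/daemon.json for a GPU node, combining the fragments above (a sketch; it assumes nvidia-container-runtime is installed at its default path):
<syntaxhighlight lang="bash">
{
  "default-runtime": "nvidia",
  "default-shm-size": "1g",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
</syntaxhighlight>
On nodes without a GPU, the file stays an empty JSON object.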
** Make sure swap is off
<syntaxhighlight lang="bash">
> sudo swapoff -a
</syntaxhighlight>
Check /etc/fstab whether swap is still configured there, and delete the entry if so.
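To keep swap disabled across reboots, it is enough to comment out any swap entry in /etc/fstab. A sketch of the edit, demonstrated on a scratch copy with made-up entries so the real file is only touched after review:

```shell
# Demonstrate on a sample copy; the real file is /etc/fstab.
# Both entries below are made up for illustration.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /    ext4 errors=remount-ro 0 1
/swapfile      none swap sw                0 0
EOF
# Comment out swap entries rather than deleting them outright
sed -i -E 's|^([^#].*[[:space:]]swap[[:space:]].*)|# \1|' /tmp/fstab.sample
cat /tmp/fstab.sample
```

Once the output looks right, run the same sed expression against /etc/fstab itself.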
== Spin up the master node ==
Use kubeadm with vanilla defaults to initialize the control plane.
<syntaxhighlight lang="bash">
> sudo systemctl enable docker.service
> sudo kubeadm init
</syntaxhighlight>
If this fails at any point, run kubeadm reset after the problems have been fixed before trying to re-initialize.

Post-init steps to set up the admin user on this account:
<syntaxhighlight lang="bash">
> cd init
> touch /home/kubernetes/.rnd
> ./finalize_master.sh
</syntaxhighlight>
== Update kubelet configuration for master node ==
Edit /etc/kubernetes/manifests/kube-controller-manager.yaml:
<syntaxhighlight lang="bash">
spec:
  containers:
  - command:
    # add these two
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
</syntaxhighlight>
Copy certs/ca.crt (the certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem. Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
<syntaxhighlight lang="bash">
spec:
  containers:
  - command:
    # add these five
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups
</syntaxhighlight>

== Daemonsets on Master node ==
=== Flannel daemonset (pod network for communication) ===
<syntaxhighlight lang="bash">
> cd init
> ./start_pod_network.sh
</syntaxhighlight>
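The contents of start_pod_network.sh are cluster-specific, but whatever flannel manifest it applies must agree with the --cluster-cidr configured for the controller manager above. The relevant fragment of the kube-flannel ConfigMap (following the upstream flannel manifest) looks like this:
<syntaxhighlight lang="bash">
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
</syntaxhighlight>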
=== nVidia daemonset ===
<syntaxhighlight lang="bash">
> cd init
> ./deploy_nvidia_device_plugin.sh
</syntaxhighlight>
The daemonset should be active on any node with an nVidia GPU.
== Authentication systems ==
The master node should now log in to the docker registry of the cluster.
<syntaxhighlight lang="bash">
> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:
</syntaxhighlight>
Also, we need to provide the read-only secret for the docker registry in every namespace. TODO: howto.

Finally, we need to set up all the rules for rbac.
<syntaxhighlight lang="bash">
> cd rbac
# generate namespaces for user groups
> ./generate_namespaces.sh
# label all compute nodes for which namespace they serve
# (after they are up, needs to be redone when new nodes are added)
> ./label_nodes.sh
# set up access rights for namespaces
> kubectl apply -f rbac.yaml
# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
</syntaxhighlight>
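A sketch for the read-only registry secret mentioned above: the standard approach is kubectl create secret docker-registry, repeated for every namespace. The secret name registry-readonly, the credentials, and the namespace placeholder below are assumptions, not the cluster's actual values.
<syntaxhighlight lang="bash">
# Create a read-only image pull secret in one namespace (placeholders in <>)
> kubectl create secret docker-registry registry-readonly \
    --docker-server=https://ccu.uni-konstanz.de:5000 \
    --docker-username=<readonly-user> \
    --docker-password=<password> \
    --namespace=<namespace>
</syntaxhighlight>
Pods then reference the secret via imagePullSecrets in their spec.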
== Persistent volumes ==
=== Local persistent volumes ===
Check directory local_storage:
* clone the git repository for the provisioner using clone_provisioner.sh (delete first if already here).
* install helm: install_helm.sh, get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
* set up and run the provisioner:
<syntaxhighlight lang="bash">
> cd install
> ./generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
</syntaxhighlight>
After local persistent volumes on the nodes have been generated in /mnt/kubernetes, they should show up under
<syntaxhighlight lang="bash">
> kubectl get pv
</syntaxhighlight>
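For reference, each local persistent volume created by the provisioner is an object of roughly the following shape (a sketch: the volume name, capacity, storage class name, directory, and node name are assumptions, not values the provisioner will actually generate):
<syntaxhighlight lang="bash">
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # names are generated by the provisioner
spec:
  capacity:
    storage: 100Gi                  # assumed size
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage   # must match the installed StorageClass
  local:
    path: /mnt/kubernetes/vol1      # one directory per volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node01                  # hypothetical node name
</syntaxhighlight>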
