Initializing the Kubernetes cluster

== Kubernetes and pre-requisites (every node) ==
* Install Kubernetes on Ubuntu 18.04. Assuming version 1.14.3 is pulled; check how to pin the version. On new systems, copy over the install script from the master node.
<syntaxhighlight lang="bash">
</syntaxhighlight>
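The install script itself is not reproduced on this page. As a rough sketch only (not the actual script), installing pinned 1.14.x packages on Ubuntu 18.04 followed the upstream apt instructions of the time:
<syntaxhighlight lang="bash">
# hypothetical sketch of the install script -- adapt the pinned version as needed
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y docker.io kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00
# prevent unattended upgrades from breaking the cluster version
sudo apt-mark hold kubelet kubeadm kubectl
</syntaxhighlight>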
* Set up other pre-requisites:
** Reconfigure docker runtime. Edit /etc/docker/daemon.json as follows:
<syntaxhighlight lang="bash">
{
}
</syntaxhighlight>
** On nodes with an nVidia GPU, add the following:
<syntaxhighlight lang="bash">
"default-runtime": "nvidia",
"default-shm-size": "1g",
"runtimes": {
    "nvidia": {
        "path": "nvidia-container-runtime",
        "runtimeArgs": []
    }
}
</syntaxhighlight>
Restart docker daemon:
<syntaxhighlight lang="bash">
> mkdir -p /etc/systemd/system/docker.service.d
> sudo systemctl daemon-reload
> sudo systemctl restart docker
</syntaxhighlight>
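Putting the pieces together, and assuming no other daemon options are configured, the complete /etc/docker/daemon.json on a GPU node looks as follows:
<syntaxhighlight lang="bash">
{
    "default-runtime": "nvidia",
    "default-shm-size": "1g",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
</syntaxhighlight>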
** Make sure swap is off:
<syntaxhighlight lang="bash">
> sudo swapoff -a
</syntaxhighlight>
Check /etc/fstab if swap is still configured there; delete the entry if this is the case.
== Spin up the master node ==
Note: if initialization fails at any point, use kubeadm reset after the problems have been fixed, before trying to re-initialize.

* Create cluster configuration scripts. The post-init steps set up the admin user on this account:
<syntaxhighlight lang="bash">
> cd init/templates
# edit cluster information in the following config file
> nano make_init_config.sh
> touch /home/kubernetes/.rnd
> ./make_init_config.sh
> ./finalize_master.sh
</syntaxhighlight>
This will generate the init config from the config template and store it in /home/kubernetes/clusters/ccu.
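As an illustration only (the actual template lives in init/templates), a minimal kubeadm init config for this setup might look like the following; the control-plane endpoint is an assumption, while the pod subnet matches the flannel/cluster-cidr setting used on this cluster:
<syntaxhighlight lang="bash">
# hypothetical generated init config (kubeadm v1beta1 schema, as used by 1.14.x)
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
controlPlaneEndpoint: "ccu.uni-konstanz.de:6443"   # assumption
networking:
  podSubnet: "10.244.0.0/16"   # must match the flannel cluster-cidr below
</syntaxhighlight>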
* Spin up the master node. Use kubeadm with the generated config to initialize the control plane:
<syntaxhighlight lang="bash">
> cd /home/kubernetes/clusters/ccu
> sudo systemctl enable docker.service
> sudo kubeadm init --config kubeadm.yaml   # the generated init config
</syntaxhighlight>

== Update kubelet configuration for master node ==
Edit /etc/kubernetes/manifests/kube-controller-manager.yaml:
<syntaxhighlight lang="bash">
spec:
  containers:
  - command:
    # add these two
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16
</syntaxhighlight>
Copy certs/ca.crt (certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem.
Edit /etc/kubernetes/manifests/kube-apiserver.yaml:
<syntaxhighlight lang="bash">
spec:
  containers:
  - command:
    # add these five
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups
</syntaxhighlight>

== Daemonsets on Master node ==
=== Flannel daemonset (pod network for communication) ===
<syntaxhighlight lang="bash">
> cd init
> ./start_pod_network.sh
</syntaxhighlight>
=== nVidia daemonset ===
<syntaxhighlight lang="bash">
> cd init
> ./deploy_nvidia_device_plugin.sh
</syntaxhighlight>
The daemonset should be active on any node with an nVidia GPU.
== Authentication systems ==
The master node should now login to the docker registry of the cluster:
<syntaxhighlight lang="bash">
> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:
</syntaxhighlight>
Also, we need to provide the read-only secret for the docker registry in every namespace.
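Such a read-only pull secret might look roughly as follows; the secret name and namespace are placeholders, and the base64-encoded docker config has to be generated (e.g. from ~/.docker/config.json after a login with the read-only account):
<syntaxhighlight lang="bash">
# hypothetical pull secret -- names and namespace are placeholders
apiVersion: v1
kind: Secret
metadata:
  name: registry-readonly
  namespace: some-namespace
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded docker config>
</syntaxhighlight>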
=== DEX with LDAP ===
TODO: outdated, we switched to containerized DEX; check what still needs to be done in the howto.
Set up according to [https://github.com/krishnapmv/k8s-ldap this tutorial]
with customized install scripts in kubernetes/init/dex/
Finally, we need to set up all the rules for rbac:
<syntaxhighlight lang="bash">
> cd rbac
# generate namespaces for user groups
> ./generate_namespaces.sh
# label all compute nodes for the namespace they serve
# (after they are up; needs to be redone when new nodes are added)
> ./label_nodes.sh
# set up access rights for namespaces
> kubectl apply -f rbac.yaml
# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
</syntaxhighlight>

== Persistent volumes ==
=== Local persistent volumes ===
Check directory local_storage:
* clone the git repository for the provisioner using clone_provisioner.sh (delete first if already here)
* install helm: install_helm.sh, get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
* set up and run the provisioner:
<syntaxhighlight lang="bash">
> cd install
> ./generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
</syntaxhighlight>
After local persistent volumes on the nodes have been generated in /mnt/kubernetes, they should show up under
<syntaxhighlight lang="bash">
> kubectl get pv
</syntaxhighlight>
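For reference, a local persistent volume as created by the provisioner might look roughly like this; the volume name, capacity, storage class and node name are illustrative assumptions, only the /mnt/kubernetes base path is from this setup:
<syntaxhighlight lang="bash">
# hypothetical local PV -- names, capacity and storage class are assumptions
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/kubernetes/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
</syntaxhighlight>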
