Initializing the Kubernetes cluster

From Collective Computational Unit
Latest revision as of 12:08, 19 June 2019

Kubernetes and prerequisites (every node)

Install Kubernetes on Ubuntu 18.04. The install script is assumed to pull version 1.14.3; pin the version so that later package upgrades do not silently change it. On new systems, copy the install script over from the master node.

> cd init
> ./install_kubernetes.sh
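To pin the version (assuming the script installs the kubelet, kubeadm and kubectl packages via apt, and that 1.14.3-00 is the package revision in the repository), hold the packages so that unattended upgrades cannot move the node to an incompatible version:

> sudo apt-get install -y kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00
> sudo apt-mark hold kubelet kubeadm kubectl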

Reconfigure docker runtime. Edit /etc/docker/daemon.json as follows:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

On nodes with an nVidia GPU, add the following:

  "default-runtime": "nvidia",
  "default-shm-size": "1g",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
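For GPU nodes, combining both fragments above gives the complete daemon.json:

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "default-runtime": "nvidia",
  "default-shm-size": "1g",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}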

Restart the docker daemon:

> mkdir -p /etc/systemd/system/docker.service.d
> systemctl daemon-reload
> systemctl restart docker
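If dockerd does not come back after the restart, a common cause is a JSON syntax error in daemon.json. It can be validated with python's stdlib; shown here on a copy in /tmp, on a real node point it at /etc/docker/daemon.json:

```shell
# Validation sketch: a syntax error in daemon.json prevents dockerd
# from starting at all, so check the file before/after editing.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```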

Make sure swap is off

> sudo swapoff -a

Check whether swap is still configured in /etc/fstab; if so, remove (or comment out) the entry, otherwise swap will be re-enabled on the next reboot.
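One way to do this non-destructively is to comment the entry out instead of deleting it, so it can be restored later. Sketched here on a copy in /tmp; on a real node run the sed command against /etc/fstab as root:

```shell
# Comment out any fstab line whose filesystem type is "swap".
cat > /tmp/fstab <<'EOF'
UUID=abcd-1234 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
EOF
sed -i '/\sswap\s/s/^/#/' /tmp/fstab
grep swap /tmp/fstab   # the swap line is now commented out
```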

Spin up the master node

Use kubeadm with vanilla defaults to initialize the control plane.

> sudo systemctl enable docker.service
> sudo kubeadm init

If this fails at any point, fix the underlying problem, run kubeadm reset, and then re-run the initialization.
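Side note: flannel, which is deployed below as the pod network, expects the pod CIDR 10.244.0.0/16. Passing it directly at init time should make the manual --cluster-cidr edit to the controller-manager manifest (described below) unnecessary:

> sudo kubeadm init --pod-network-cidr=10.244.0.0/16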


  • Post-init steps to set up the admin user on this account:
> cd init
> ./finalize_master.sh
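finalize_master.sh is assumed to cover the standard post-init steps that kubeadm prints after a successful init; for reference, done by hand they are:

> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config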


Update control plane manifests on the master node

Edit /etc/kubernetes/manifests/kube-controller-manager.yaml:

spec:
  containers:
  - command:
    # add these two
    - --allocate-node-cidrs=true
    - --cluster-cidr=10.244.0.0/16

Copy certs/ca.crt (certificate for ccu.uni-konstanz.de) to /usr/share/ca-certificates/ca-dex.pem.

Edit /etc/kubernetes/manifests/kube-apiserver.yaml:

spec:
  containers:
  - command:
    # add these five
    - --oidc-issuer-url=https://ccu.uni-konstanz.de:32000/dex
    - --oidc-client-id=loginapp
    - --oidc-ca-file=/usr/share/ca-certificates/ca-dex.pem
    - --oidc-username-claim=name
    - --oidc-groups-claim=groups

Daemonsets on the master node

Flannel daemonset (pod network for communication)

> cd init
> ./start_pod_network.sh
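start_pod_network.sh presumably just applies the flannel daemonset manifest; the upstream equivalent (URL is an assumption, pin a release rather than master in practice) would be:

> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml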


nVidia daemonset

> cd init
> ./deploy_nvidia_device_plugin.sh

The daemonset should be active on any node with an nVidia GPU.
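To verify that the plugin has registered the GPUs (the node name here is a placeholder), the nvidia.com/gpu resource should appear in the node's capacity:

> kubectl describe node gpu-node-1 | grep nvidia.com/gpu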

Authentication systems

The master node should now log in to the docker registry of the cluster.

> docker login https://ccu.uni-konstanz.de:5000
Username: bastian.goldluecke
Password:

Also, we need to provide the read-only secret for the docker registry in every namespace.

TODO: howto.
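One possible approach (the secret name, username, and namespace below are placeholders): create an image pull secret in each namespace, and reference it from pod specs via imagePullSecrets, or attach it to the namespace's default service account.

> kubectl create secret docker-registry registry-readonly \
    --docker-server=https://ccu.uni-konstanz.de:5000 \
    --docker-username=readonly --docker-password=<password> \
    --namespace=<group-namespace>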


Finally, we need to set up all the RBAC rules.

> cd rbac
# generate namespaces for user groups
> ./generate_namespaces.sh
# label all compute nodes with the namespace they serve
# (nodes must be up; redo this when new nodes are added)
> ./label_nodes.sh
# set up access rights for namespaces
> kubectl apply -f rbac.yaml
# set up rights for which namespaces can access which compute node
> kubectl apply -f node_to_groups.yaml
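The exact rules live in rbac.yaml; as an illustration of the kind of rule it presumably contains, this binds an LDAP/OIDC group (group and namespace names are hypothetical) to the built-in edit role in its namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: group-edit
  namespace: examplegroup
subjects:
- kind: Group
  name: examplegroup
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io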

Persistent volumes

Local persistent volumes

Work in the directory local_storage:

  • Clone the git repository for the provisioner using clone_provisioner.sh (delete it first if it is already present).
  • Install helm via install_helm.sh / get_helm.sh. Do NOT run helm init (unsafe and soon obsolete).
  • Set up and run the provisioner:
> cd install
> ./generate_config.sh
> kubectl apply -f install_storageclass.yaml
> kubectl apply -f install_service.yaml
> kubectl apply -f provisioner_generated.yaml
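install_storageclass.yaml presumably defines a storage class along these lines (the class name is an assumption). volumeBindingMode: WaitForFirstConsumer matters for local volumes: binding is delayed until a pod is scheduled, so the scheduler can pick a node that actually has the volume.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer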

After the local persistent volumes on the nodes have been generated in /mnt/kubernetes, they should show up in the output of

> kubectl get pv