Cluster:Compute nodes

From Collective Computational Unit
Revision as of 23:12, 9 February 2022 by Bastian.goldluecke (talk | contribs) (List of compute nodes)

List of compute nodes

NOTE: Glasya has been ordered but is not installed yet. Taints are currently in place for Asmodeus, Vecna, and Zariel (which has now been repaired).

NOTE: Imp and Dretch do not have an InfiniBand connection, so Ceph filesystem access is slightly slower on them. Using the local RAID for caching data is recommended. Both machines (Imp in particular) also have much less powerful GPUs than the rest of the cluster, which makes these two systems ideal for testing and experimenting.
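One simple way to cache data on node-local storage is an emptyDir volume, which Kubernetes backs with the node's local disk. The following is a minimal sketch, not an official recipe: the pod name, container image, and the /cache mount path are illustrative, and whether emptyDir actually lands on the local RAID depends on how the node's kubelet storage is configured.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo          # illustrative name
spec:
  nodeSelector:
    kubernetes.io/hostname: dretch
  containers:
  - name: worker
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    volumeMounts:
    - name: scratch
      mountPath: /cache     # copy data here once, then read from local disk
  volumes:
  - name: scratch
    emptyDir: {}            # backed by the node's local storage
```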


The following GPU nodes are currently part of the cluster. There are more nodes which act as API servers or provide the Ceph filesystem and web services, but these are not available for standard users.

CCU name | Access | Platform | GPUs | Labels | Taints
Imp | all | Dual Xeon Rack | 4 x Titan Xp @ 12 GB | gpumem=12, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true | (none)
Dretch | all | Dual Xeon Rack | 4 x Titan RTX @ 24 GB | gpumem=24, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true | (none)
Belial | exc-cb | Supermicro | 8 x Quadro RTX 6000 @ 24 GB | gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true | gpumem=24:NoSchedule
Fierna | exc-cb | Supermicro | 8 x Quadro RTX 6000 @ 24 GB | gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true | gpumem=24:NoSchedule
Vecna | exc-cb, inf | NVIDIA DGX-2 | 16 x V100 @ 32 GB | gpumem=32, gpuarch=nvidia-v100, nvidia-compute-capability-sm80=true | gpumem=32:NoSchedule
Zariel | trr161 | NVIDIA DGX A100 | 8 x A100 @ 40 GB | gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true | gpumem=40:NoSchedule
Tiamat | exc-cb | Supermicro | 4 x A100 @ 40 GB | gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true | gpumem=40:NoSchedule
Asmodeus | all | Supermicro | 4 x A100 HGX 320 GB, subdivided into 8 GPUs @ 40 GB | gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true | gpumem=40:NoSchedule
Glasya | exc-cb | Delta | 8 x A40 @ 48 GB | gpumem=48, gpuarch=nvidia-a40, nvidia-compute-capability-sm80=true | gpumem=48:NoSchedule


The CCU name is the internal name used in the Kubernetes cluster, as well as the configured hostname of the node. Nodes are not accessible from the outside world; you have to access the cluster via kubectl through the API server.

In the column "Access" you can find which Kubernetes user groups can access each node.

Group | Description
exc-cb | Centre for the Advanced Study of Collective Behaviour
trr161 | SFB Transregio 161 "Quantitative Methods for Visual Computing"
inf | Department of Computer Science
cvia | Computer Vision and Image Analysis Group

Targeting a specific node

Targeting a specific node can be done in two ways: either by selecting a node name directly, or by requiring certain labels on the node. See the table above for node names and associated labels. See the Kubernetes API documentation on how to assign pods to nodes, or refer to the following examples.


Selecting a node name

Example: a GPU-enabled pod which runs only on the node "belial". Note that Belial is a more powerful system, so it is protected by a taint (see table above). Thus, the pod also has to tolerate the respective taint so that it can actually be scheduled on Belial; tolerations are explained in more detail below.

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: belial
  # Belial is protected by the taint gpumem=24:NoSchedule (see table above);
  # without this toleration the pod would never be scheduled there.
  tolerations:
  - key: "gpumem"
    operator: "Equal"
    value: "24"
    effect: "NoSchedule"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)

Requiring a certain label on the node

Example: GPU-enabled pod which requires compute capability of at least sm-75:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    compute-capability-atleast-sm75: "true"
    # Note: label values in a nodeSelector must be quoted strings.
    # If a node has e.g. the label "compute-capability-sm80", it also has the
    # corresponding "atleast" label for all lower or equal compute capabilities.
    # The same holds for "gpumem".
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)

Targeting more powerful GPUs

By default, Kubernetes schedules GPU pods only on the smallest class of GPU (NVIDIA Titan). This is achieved by assigning nodes with higher-grade GPUs a "node taint", which makes the node available only to pods which declare that they "tolerate" the taint.

So if your task, for example, requires a GPU with *exactly* 32 GB, you have to

  1. make the pod tolerate the taint "gpumem=32:NoSchedule" (see table above), and
  2. make the pod require the node label "gpumem" to be exactly 32.

See the Kubernetes API documentation on taints and tolerations for more details.


Example:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpumem: "32"
  tolerations:
  - key: "gpumem"
    # Note: to be able to run on a GPU with any amount of memory, 
    # replace the operator/value pair by just 'operator: "Exists"'.
    operator: "Equal"
    value: "32"
    effect: "NoSchedule"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)


If you need a GPU with *at least* 32 GB, but would also be happy with more, you can simply tolerate any amount. Then, make the pod require the node label "gpumem" to be larger than 31.

Note: typically, you should *not* do this; instead, reserve a GPU which has just enough memory. However, if e.g. all 32 GB GPUs are already busy, you can move up to a 40 GB GPU.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  # the standard node selector is insufficient here.
  # needs to use the more expressive "nodeAffinity".
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpumem
            operator: Gt
            # "values" is a list; for Gt, its single entry is compared as an integer
            values: ["31"]
        # Note: multiple entries under nodeSelectorTerms are ORed; a node matching
        # either term is eligible. The same pattern also works to require a
        # minimum compute capability:
        - matchExpressions:
          - key: nvidia-compute-capability-sm
            operator: Gt
            values: ["79"]
  tolerations:
  - key: "gpumem"
    operator: "Exists"
    effect: "NoSchedule"
  # ... rest of the specs like before
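For reference, the affinity and toleration fragments above combine into one complete pod manifest (a sketch using the same placeholder pod as the earlier examples; note that with the Gt operator, "values" must be a list containing a single integer-valued string):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpumem
            operator: Gt
            values: ["31"]   # any node labeled gpumem=32 or higher
  tolerations:
  - key: "gpumem"
    operator: "Exists"       # tolerate the gpumem taint for any amount of memory
    effect: "NoSchedule"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
```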