Cluster:Compute nodes

Latest revision as of 13:26, 15 June 2024

== List of compute nodes ==

'''NOTE: Imp and Dretch do not have an InfiniBand connection, so Ceph filesystem access is slightly slower; using the local RAID to cache data is recommended. Both machines (Imp in particular) have much less powerful GPUs than the rest of the cluster, so these two systems are ideal for testing and experimenting.'''


The following GPU nodes are currently part of the cluster. There are more nodes which act as API servers or provide the Ceph filesystem and web services, but these are not available for standard users.

Note: the labels and taints in this table might be outdated; use <code>kubectl describe node &lt;name&gt;</code> for up-to-date information.

{| class="wikitable"
! scope="col"| CCU name
! scope="col"| Access
! scope="col"| Platform
! scope="col"| GPUs
! scope="col"| Labels
! scope="col"| Taints
|-
! scope="row"| imp
| all
| Dual Xeon Rack
| 4 x Titan Xp @ 12 GB
| gpumem=12, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| dretch
| all
| Dual Xeon Rack
| 4 x Titan RTX @ 24 GB
| gpumem=24, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| belial
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| fierna
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| vecna
| exc-cb, inf
| nVidia DGX-2
| 16 x V100 @ 32 GB
| gpumem=32, gpuarch=nvidia-v100, nvidia-compute-capability-sm80=true
| gpumem=32:NoSchedule
|-
! scope="row"| zariel
| trr161
| nVidia DGX A100
| 8 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| tiamat
| exc-cb
| Supermicro
| 4 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| asmodeus
| all
| Supermicro
| 4 x A100 HGX 320 GB, subdivided into 8 GPUs @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| demogorgon
| exc-cb
| Delta
| 8 x A40 @ 48 GB
| gpumem=48, gpuarch=nvidia-a40, nvidia-compute-capability-sm80=true
| gpumem=48:NoSchedule
|-
! scope="row"| kiaransalee
| seds
| Delta
| 8 x H100 HGX 640 GB
| gpumem=80, gpuarch=nvidia-h100, nvidia-compute-capability-sm80=true
| gpumem=80:NoSchedule
|}


The CCU name is the internal name used in the Kubernetes cluster, as well as the configured hostname of the node. Nodes are not accessible from the outside world; you have to access the cluster with kubectl through the API server.

In the column "Access" you can find which Kubernetes user groups are allowed to access each node. Please only target a specific node if you are allowed to.

{| class="wikitable"
! scope="col"| Group
! scope="col"| Description
|-
! scope="row"| exc-cb
| Centre for the Advanced Study of Collective Behaviour
|-
! scope="row"| trr161
| SFB Transregio 161 "Quantitative Methods for Visual Computing"
|-
! scope="row"| inf
| Department of Computer Science
|-
! scope="row"| seds
| Social and Economic Data Sciences
|-
! scope="row"| cvia
| Computer Vision and Image Analysis Group
|}

== Targeting a specific node ==

Targeting a specific node can be done in two different ways: either select a node name directly, or require certain labels on the node. See the table above for node names and associated labels. See the [https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ Kubernetes documentation on assigning pods to nodes], or refer to the following examples.


=== Selecting a node name ===

Example: a GPU-enabled pod which runs only on the node "belial". Note that Belial is a more powerful system, so it is protected by a taint (see table above). Thus, you also have to tolerate the respective taint so that the pod can actually be scheduled on Belial; this is explained below.

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: belial
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>

=== Requiring a certain label on the node ===

Example: a GPU-enabled pod which requires a compute capability of at least sm75:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    # nodeSelector values must be strings, so "true" has to be quoted.
    compute-capability-atleast-sm75: "true"
    # note: if a node has e.g. the label "compute-capability-sm80", it also has the
    # corresponding "atleast"-label for all lower or equal compute capabilities. The same holds for "gpumem".
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>
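The "atleast" label convention used above can be illustrated with a short, hypothetical Python sketch. This is not the cluster's actual tooling; the label names and capability list are illustrative, assuming the sm70/sm75/sm80 levels from the node table:

```python
# Hypothetical sketch: a node with compute capability sm-X implicitly
# carries an "atleast" label for every level less than or equal to X.
CAPABILITIES = [70, 75, 80]  # known sm levels, ascending (illustrative)

def atleast_labels(sm: int) -> dict:
    """Return the 'atleast' labels implied by a node's capability."""
    return {
        f"compute-capability-atleast-sm{c}": "true"
        for c in CAPABILITIES
        if c <= sm
    }

# An sm80 node satisfies the selector "compute-capability-atleast-sm75":
print(atleast_labels(80))
```

So a pod selecting `compute-capability-atleast-sm75: "true"` can land on both sm75 and sm80 nodes.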

== Targeting more powerful GPUs ==

By default, Kubernetes schedules GPU pods only on the smallest class of GPU (nVidia Titan). This is achieved by assigning a "node taint" to nodes with higher-grade GPUs, which makes those nodes available only to pods that declare themselves "tolerant" of the taint.

So if your task, for example, requires a GPU with *exactly* 32 GB, you have to

# make the pod tolerate the taint "gpumem=32:NoSchedule" (see table above).
# make the pod require the node label "gpumem" to be exactly 32.

See the [https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ Kubernetes documentation on taints and tolerations] for more details.


Example:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpumem: "32"
  tolerations:
  - key: "gpumem"
    # Note: to be able to run on a GPU with any amount of memory,
    # replace the operator/value pair by just 'operator: "Exists"'.
    operator: "Equal"
    value: "32"
    effect: "NoSchedule"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>


If you need a GPU with *at least* 32 GB, but would also be happy with more, you can simply tolerate any amount of GPU memory and make the pod require the node label "gpumem" to be larger than 31.

Note: typically you should *not* do this; reserve a GPU that has just enough memory. However, if e.g. all 32 GB GPUs are already busy, you can move up to a 40 GB GPU.

Example:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  # the standard node selector is insufficient here;
  # we need the more expressive "nodeAffinity".
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpumem
            # "Gt" compares numerically, but the values must be given
            # as a list of strings.
            operator: Gt
            values: ["31"]
        # note: separate nodeSelectorTerms are ORed with each other; this
        # also works to specify a minimum compute capability.
        - matchExpressions:
          - key: nvidia-compute-capability-sm
            operator: Gt
            values: ["79"]
  tolerations:
  - key: "gpumem"
    operator: "Exists"
    effect: "NoSchedule"
  # ... rest of the specs like before
</syntaxhighlight>
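Kubernetes evaluates the "Gt" (and "Lt") node-affinity operators by parsing the string label value as an integer, which is why "gpumem" values like "40" can be compared against "31". A minimal Python sketch of that comparison (the function name is illustrative, not a Kubernetes API):

```python
# Sketch of the comparison behind the "Gt" node-affinity operator:
# label values are strings, but "Gt"/"Lt" interpret them as integers.
def matches_gt(label_value: str, required: str) -> bool:
    """Return True if the node's label value is numerically greater."""
    return int(label_value) > int(required)

# A node labeled gpumem=40 satisfies "gpumem Gt 31":
print(matches_gt("40", "31"))  # True
```

This is also why the affinity example above writes `values: ["31"]` as a string rather than a bare number.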