__TOC__
== List of compute nodes ==
'''NOTE: Imp and Dretch do not have an InfiniBand connection, so Ceph filesystem access is slightly slower. Using the local RAID for caching data is recommended. Both machines (Imp in particular) have much less powerful GPUs than the rest of the cluster, so these two systems are ideal for testing and experimenting.'''
The following GPU nodes are currently part of the cluster. There are more nodes which act as API servers or provide the Ceph filesystem and web services, but these are not available for standard users.
Note: the labels and taints in this table might be outdated; use <code>kubectl describe node &lt;name&gt;</code> for up-to-date information.
{| class="wikitable"
|-
! scope="col"| CCU name
! scope="col"| Access
! scope="col"| Platform
! scope="col"| GPUs
! scope="col"| Labels
! scope="col"| Taints
|-
! scope="row"| imp
| all
| Dual Xeon Rack
| 4 x Titan Xp @ 12 GB
| gpumem=12, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| dretch
| all
| Dual Xeon Rack
| 4 x Titan RTX @ 24 GB
| gpumem=24, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| belial
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| fierna
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| vecna
| exc-cb, inf
| nVidia DGX-2
| 16 x V100 @ 32 GB
| gpumem=32, gpuarch=nvidia-v100, nvidia-compute-capability-sm80=true
| gpumem=32:NoSchedule
|-
! scope="row"| zariel
| trr161
| nVidia DGX A100
| 8 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| tiamat
| exc-cb
| Supermicro
| 4 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| asmodeus
| all
| Supermicro
| 4 x A100 HGX 320 GB, subdivided in 8 GPUs @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| demogorgon
| exc-cb
| Delta
| 8 x A40 @ 48 GB
| gpumem=48, gpuarch=nvidia-a40, nvidia-compute-capability-sm80=true
| gpumem=48:NoSchedule
|-
! scope="row"| kiaransalee
| seds
| Delta
| 8 x H100 HGX 640 GB
| gpumem=80, gpuarch=nvidia-h100, nvidia-compute-capability-sm80=true
| gpumem=80:NoSchedule
|-
|}
The CCU name is the internal name used in the Kubernetes cluster, as well as the configured hostname of the node. Nodes are not accessible from the outside world; you have to access the cluster with kubectl through the API server.
In the column "Access" you can find which Kubernetes user groups are allowed to access this node. Please only target a specific node if you are allowed to.
{| class="wikitable"
|-
! scope="col"| Group
! scope="col"| Description
|-
! scope="row"| exc-cb
| Centre for the Advanced Study of Collective Behaviour
|-
! scope="row"| trr161
| SFB Transregio 161 "Quantitative Methods for Visual Computing"
|-
! scope="row"| inf
| Department of Computer Science
|-
! scope="row"| seds
| Social and Economic Data Sciences
|-
! scope="row"| cvia
| Computer Vision and Image Analysis Group
|-
|}
== Targeting a specific node ==
Targeting a specific node can be done in two different ways, either selecting a node name directly, or requiring certain labels on the node.
See the table above for node names and associated labels. See the [https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ Kubernetes documentation on assigning pods to nodes], or refer to the following examples.
=== Selecting a node name ===
Example: a GPU-enabled pod which runs only on the node "belial". Note that Belial is a more powerful system, so it is protected by a taint (see table above). Thus, you also have to tolerate the respective taint so that the pod can actually be scheduled on Belial, as explained below.
<syntaxhighlight>
# sketch; pod name and image are placeholders -- the relevant parts are
# the nodeSelector on the hostname and the toleration for Belial's taint
apiVersion: v1
kind: Pod
metadata:
  name: belial-example
spec:
  nodeSelector:
    kubernetes.io/hostname: belial
  tolerations:
  - key: "gpumem"
    operator: "Equal"
    value: "24"
    effect: "NoSchedule"
  # more specs (containers, volumes etc.)
</syntaxhighlight>
=== Requiring a certain label on the node ===
Example: a GPU-enabled pod which runs on any node labeled gpuarch=nvidia-rtx (Belial or Fierna, see table above), again tolerating the corresponding taint:
<syntaxhighlight>
# sketch; pod name is a placeholder
apiVersion: v1
kind: Pod
metadata:
  name: rtx-example
spec:
  nodeSelector:
    gpuarch: nvidia-rtx
  tolerations:
  - key: "gpumem"
    operator: "Equal"
    value: "24"
    effect: "NoSchedule"
  # more specs (containers, volumes etc.)
</syntaxhighlight>
== Targeting more powerful GPUs with more than 20 GB ==
By default, Kubernetes schedules GPU pods only on the smallest class of GPU (the nVidia Titans, see table above). This is achieved by assigning nodes with higher-grade GPUs a "node taint", which makes the node available only to pods which declare that they are "tolerant" of the taint.
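Conceptually, a NoSchedule taint blocks a pod from a node unless the pod carries a matching toleration. The following is an illustrative Python sketch of that matching logic (simplified, not the actual Kubernetes scheduler code; the data below uses Belial's taint from the table above):

```python
# Illustrative sketch of how NoSchedule taints block pods without a
# matching toleration. Simplified; NOT the actual scheduler code.

def tolerates(taint, toleration):
    """A toleration matches a taint if the key and effect agree and
    either the operator is "Exists" or the values are equal."""
    if toleration.get("key") != taint["key"]:
        return False
    if toleration.get("effect") not in (None, taint["effect"]):
        return False
    if toleration.get("operator") == "Exists":
        return True
    return toleration.get("value") == taint["value"]

def schedulable(node_taints, pod_tolerations):
    """A pod fits a node only if every NoSchedule taint is tolerated."""
    return all(
        any(tolerates(taint, tol) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

# belial carries the taint gpumem=24:NoSchedule (see table above)
belial_taints = [{"key": "gpumem", "value": "24", "effect": "NoSchedule"}]
tolerant_pod = [{"key": "gpumem", "operator": "Equal", "value": "24",
                 "effect": "NoSchedule"}]

print(schedulable(belial_taints, tolerant_pod))  # True
print(schedulable(belial_taints, []))            # False
```

This is only meant to explain why an untolerated pod never lands on a tainted node; the real scheduler handles more effects and operators.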
So if, for example, your task requires a GPU with *exactly* 32 GB, you have to
# make the pod tolerate the taint "gpumem=32:NoSchedule" (see table above), and
# make the pod require the node label "gpumem" to be exactly "32".
See the [https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ Kubernetes documentation on taints and tolerations] for more details.
Example:
<syntaxhighlight>
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    gpumem: "32"
  tolerations:
  - key: "gpumem"
    operator: "Equal"
    value: "32"
    effect: "NoSchedule"
  # ... rest of the specs (containers, volumes etc.)
</syntaxhighlight>
If you need a GPU with *at least* 32 GB, but would also be happy with more, you can tolerate any amount. Then
# make the pod tolerate any "gpumem" taint, and
# make the pod require the node label "gpumem" to be larger than 31.
Note: typically, you should *not* do this and instead reserve a GPU which has just enough memory. However, if e.g. all 32 GB GPUs are busy already, this lets you move up to a 40 GB GPU.
Example:
<syntaxhighlight>
apiVersion: v1
kind: Pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpumem
            operator: Gt
            values:
            - "31"
          # note: this also works to specify a minimum compute capability:
          # - key: nvidia-compute-capability-sm
          #   operator: Gt
          #   values:
          #   - "79"
  tolerations:
  - key: "gpumem"
    operator: "Exists"
    effect: "NoSchedule"
  # ... rest of the specs like before
</syntaxhighlight>
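The Gt operator in the example compares the node's label value and the given value numerically, not as strings. A rough Python sketch of how one matchExpressions block is evaluated against a node's labels (illustrative only, not the real scheduler; labels taken from the node table above):

```python
# Illustrative evaluation of nodeAffinity matchExpressions against a
# node's labels. Simplified; NOT the actual scheduler code.

def matches(node_labels, match_expressions):
    """All expressions of one matchExpressions block must hold."""
    for expr in match_expressions:
        value = node_labels.get(expr["key"])
        if expr["operator"] == "In":
            if value not in expr["values"]:
                return False
        elif expr["operator"] == "Gt":
            # Gt/Lt compare label value and given value as integers
            if value is None or int(value) <= int(expr["values"][0]):
                return False
        else:
            raise NotImplementedError("operator not covered in this sketch")
    return True

# labels as listed in the node table above
vecna_labels = {"gpumem": "32", "gpuarch": "nvidia-v100"}
imp_labels = {"gpumem": "12", "gpuarch": "nvidia-titan"}

at_least_32 = [{"key": "gpumem", "operator": "Gt", "values": ["31"]}]
print(matches(vecna_labels, at_least_32))  # True: 32 > 31
print(matches(imp_labels, at_least_32))    # False: 12 > 31 fails
```

This is why requiring "gpumem" greater than 31 selects the 32, 40, 48 and 80 GB nodes but not the Titans.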