__TOC__
== List of compute nodes ==
'''NOTE: Asmodeus and Demogorgon have been ordered but are not installed yet, and their taints are not yet in place.'''
The following GPU nodes are currently part of the cluster. Additional nodes act as API servers or provide the Ceph filesystem and web services, but these are not available to standard users.
{| class="wikitable"
|-
! scope="col"| CCU name
! scope="col"| Access
! scope="col"| Platform
! scope="col"| GPUs
! scope="col"| Labels
! scope="col"| Taints
|-
! scope="row"| Asmodeus
| all
| Supermicro
| 4 x A100 HGX 320 GB, subdivided into 16 GPUs @ 20 GB
| gpumem=20, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
|
|-
! scope="row"| Glasya
| trr161
| Dual Xeon Rack
| 4 x Titan RTX @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| Belial
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| Fierna
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| Vecna
| exc-cb, inf
| NVIDIA DGX-2
| 16 x V100 @ 32 GB
| gpumem=32, gpuarch=nvidia-v100, nvidia-compute-capability-sm70=true
| gpumem=32:NoSchedule
|-
! scope="row"| Zariel
| trr161
| NVIDIA DGX A100
| 8 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| Tiamat
| exc-cb
| Supermicro
| 4 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| Demogorgon
| exc-cb
| Delta
| 8 x A40 @ 48 GB
| gpumem=48, gpuarch=nvidia-a40, nvidia-compute-capability-sm80=true
| gpumem=48:NoSchedule
|}
The CCU name is the internal name used in the Kubernetes cluster and is also the configured hostname of the node. Nodes are not reachable from the outside world; you access the cluster with kubectl through the API server.
The "Access" column lists which Kubernetes user groups may use each node.
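You can inspect the labels and taints of the nodes yourself through the API server, assuming your kubeconfig is already set up for this cluster ("tiamat" is used below as an example hostname, assuming hostnames are the lowercase CCU names):

<syntaxhighlight lang="bash">
# List all nodes together with their labels
kubectl get nodes --show-labels

# Show the full details of a single node, including its taints
kubectl describe node tiamat
</syntaxhighlight>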
{| class="wikitable"
|-
! scope="col"| Group
! scope="col"| Description
|-
! scope="row"| exc-cb
| Centre for the Advanced Study of Collective Behaviour
|-
! scope="row"| trr161
| SFB Transregio 161 "Quantitative Methods for Visual Computing"
|-
! scope="row"| inf
| Department of Computer Science
|-
! scope="row"| cvia
| Computer Vision and Image Analysis Group
|}
== Targeting a specific node ==
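To schedule a pod on a particular class of nodes, combine a <code>nodeSelector</code> matching the labels from the table above with a toleration for the node's taint. The snippet below is a minimal sketch targeting the A100 nodes with 40 GB GPUs (Zariel, Tiamat); the pod name and container image are placeholders, not cluster requirements:

<syntaxhighlight lang="yaml">
# Sketch: run a pod on an A100 node with 40 GB GPUs.
# The nodeSelector matches the labels listed above; the toleration is
# required because those nodes carry the taint gpumem=40:NoSchedule.
apiVersion: v1
kind: Pod
metadata:
  name: a100-example        # placeholder name
spec:
  nodeSelector:
    gpuarch: nvidia-a100
    gpumem: "40"
  tolerations:
    - key: gpumem
      operator: Equal
      value: "40"
      effect: NoSchedule
  containers:
    - name: main
      image: nvcr.io/nvidia/cuda:12.3.1-base-ubuntu22.04   # example image
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1
</syntaxhighlight>

To pin a pod to one specific node rather than a class of nodes, you can instead select on the built-in label <code>kubernetes.io/hostname</code> (e.g. <code>kubernetes.io/hostname: tiamat</code>, again assuming lowercase hostnames), keeping the matching toleration.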