Cluster:Compute nodes

__TOC__
'''NOTE: Imp and Dretch do not have an InfiniBand connection, so Ceph filesystem access is slightly slower. Using the local RAID for caching data is recommended.
Both machines (Imp in particular) have much less powerful GPUs than the rest of the cluster, so these two systems are ideal for testing and experimenting.
'''
== List of compute nodes ==

The following GPU nodes are currently part of the cluster. There are more nodes which act as API servers or provide the Ceph filesystem and web services, but these are not available for standard users.
Note: Labels / Taints in this table might be outdated, use "kubectl describe node <name>" for up-to-date information.
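To check the labels and taints currently assigned to a node, the standard kubectl commands can be used, for example:

<syntaxhighlight lang="bash">
# show full details (including labels and taints) for one node
kubectl describe node vecna

# list all nodes together with their labels
kubectl get nodes --show-labels

# list all nodes with only their taints
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
</syntaxhighlight>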
{| class="wikitable"
! scope="col"| CCU name
! scope="col"| Access
! scope="col"| Hardware
! scope="col"| GPUs
! scope="col"| Labels
! scope="col"| Taints
|-
! scope="row"| imp
| all
| Dual Xeon Rack
| 4 x Titan Xp @ 12 GB
| gpumem=12, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| dretch
| all
| Dual Xeon Rack
| 4 x Titan RTX @ 24 GB
| gpumem=24, gpuarch=nvidia-titan, nvidia-compute-capability-sm70=true
|
|-
! scope="row"| belial
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| fierna
| exc-cb
| Supermicro
| 8 x Quadro RTX 6000 @ 24 GB
| gpumem=24, gpuarch=nvidia-rtx, nvidia-compute-capability-sm75=true
| gpumem=24:NoSchedule
|-
! scope="row"| vecna
| exc-cb, inf
| nVidia DGX-2
| 16 x V100 @ 32 GB
| gpumem=32, gpuarch=nvidia-v100, nvidia-compute-capability-sm80=true
| gpumem=32:NoSchedule
|-
! scope="row"| zariel
| trr161
| nVidia DGX A100
| 8 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| tiamat
| exc-cb
| Supermicro
| 4 x A100 @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| asmodeus
| all
| Supermicro
| 4 x A100 HGX 320 GB, subdivided in 8 GPUs @ 40 GB
| gpumem=40, gpuarch=nvidia-a100, nvidia-compute-capability-sm80=true
| gpumem=40:NoSchedule
|-
! scope="row"| demogorgon
| exc-cb
| Delta
| 8 x A40 @ 48 GB
| gpumem=48, gpuarch=nvidia-a40, nvidia-compute-capability-sm80=true
| gpumem=48:NoSchedule
|-
! scope="row"| kiaransalee
| seds
| Delta
| 8 x H100 HGX 640 GB
| gpumem=80, gpuarch=nvidia-h100, nvidia-compute-capability-sm80=true
| gpumem=80:NoSchedule
|-
|}
The CCU name is the internal name used in the Kubernetes cluster, as well as the configured hostname of the node. Nodes are not accessible from the outside world; you have to access the cluster with kubectl through the API server.
In the column "Access" you can find which Kubernetes user groups are allowed to access the node. Please only target a specific node if you are allowed to.
{| class="wikitable"
! scope="col"| Group
! scope="col"| Description
! scope="row"| inf
| Department of Computer Science
|-
! scope="row"| seds
| Social and Economic Data Sciences
|-
! scope="row"| cvia
|
|-
|}
 
== Targeting a specific node ==
 
Targeting a specific node can be done in two different ways: either by selecting a node name directly, or by requiring certain labels on the node.
See the table above for node names and associated labels.
See the [https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ Kubernetes documentation on assigning pods to nodes], or refer to the following examples, which should be largely self-explanatory.
 
 
=== Selecting a node name ===
 
Example: GPU-enabled pod which runs only on the node "belial". Note that Belial is a more powerful system, so it is protected by a taint (see table above). Thus, you also have to tolerate the respective taint so that the pod can actually be scheduled on Belial; tolerations are explained below.
 
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: belial
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>
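To actually run such a pod, save the spec to a file (the name gpu-pod.yaml below is just an example) and submit it with kubectl:

<syntaxhighlight lang="bash">
# submit the pod to the cluster
kubectl apply -f gpu-pod.yaml

# check that it was scheduled on the intended node (NODE column)
kubectl get pod gpu-pod -o wide

# remove the pod again when done, freeing the GPU
kubectl delete pod gpu-pod
</syntaxhighlight>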
 
=== Requiring a certain label on the node ===
 
Example: GPU-enabled pod which requires a compute capability of at least sm75:
 
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    # note: if a node has e.g. the label "compute-capability-sm80", it also has the
    # corresponding "atleast"-label for all lower or equal compute capabilities.
    # The same holds for "gpumem". Label values must be quoted strings.
    compute-capability-atleast-sm75: "true"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>
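Entries in nodeSelector are ANDed, so several labels from the table above can be combined; a sketch (assuming the node labels shown in the table):

<syntaxhighlight lang="yaml">
# fragment of a pod spec: require an A100 node with compute capability sm80
spec:
  nodeSelector:
    gpuarch: nvidia-a100
    nvidia-compute-capability-sm80: "true"
</syntaxhighlight>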
 
== Targeting more powerful GPUs ==
 
By default, Kubernetes schedules GPU pods only on the smallest class of GPU (nVidia Titan). This is achieved by assigning nodes with higher-grade GPUs a "node taint", which makes the node available only to pods that explicitly declare a "toleration" for that taint.
 
So if your task, for example, requires a GPU with *exactly* 32 GB, you have to

# make the pod tolerate the taint "gpumem=32:NoSchedule" (see table above).
# make the pod require the node label "gpumem" to be exactly 32.
 
See the [https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ Kubernetes API documentation on taints and tolerations] for more details.
 
 
Example:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpumem: "32"
  tolerations:
  - key: "gpumem"
    # Note: to be able to run on a GPU with any amount of memory,
    # replace the operator/value pair by just 'operator: "Exists"'.
    operator: "Equal"
    value: "32"
    effect: "NoSchedule"
  containers:
  - name: gpu-container
    image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
      limits:
        cpu: 1
        nvidia.com/gpu: 1
        memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>
 
 
If you need a GPU with *at least* 32 GB, but would also be happy with more, you can simply tolerate any amount of GPU memory. Then, make the pod require the node label "gpumem" to be larger than 31.

Note: typically, you should *not* do this; reserve a GPU which has just enough memory. However, if e.g. all 32 GB GPUs are already busy, you can move up to a 40 GB GPU.
 
Example:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  # the standard nodeSelector is insufficient here;
  # this needs the more expressive "nodeAffinity".
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        # multiple nodeSelectorTerms are ORed;
        # matchExpressions within one term are ANDed.
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpumem
            operator: Gt
            values: ["31"]
        # note: this also works to specify a minimum compute capability
        - matchExpressions:
          - key: nvidia-compute-capability-sm
            operator: Gt
            values: ["79"]
  tolerations:
  - key: "gpumem"
    operator: "Exists"
    effect: "NoSchedule"
  # ... rest of the specs like before
</syntaxhighlight>