So if your task, for example, requires a GPU with *exactly* 32 GB of memory, you have to
# make the pod tolerate the taint "gpumem-32" (see table below).
# make the pod require the node label "gpumem-32".
Example:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  nodeSelector:
    gpumem: "32"
  containers:
    - name: gpu-container
      image: nvcr.io/nvidia/tensorflow:20.09-tf2-py3
      command: ["sleep", "1d"]
      resources:
        requests:
          cpu: 1
          nvidia.com/gpu: 1
          memory: 10Gi
        limits:
          cpu: 1
          nvidia.com/gpu: 1
          memory: 10Gi
  # more specs (volumes etc.)
</syntaxhighlight>
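The example above only covers step 2 (the node label); the toleration from step 1 has to be added to the pod spec as well. A minimal sketch of that stanza, assuming the taint uses the key <code>gpumem</code> with value <code>32</code> and effect <code>NoSchedule</code> (check the table below for the actual key, value, and effect used on the nodes):

<syntaxhighlight lang="yaml">
spec:
  # tolerate the "gpumem-32" taint so the scheduler may
  # place this pod on the tainted 32 GB GPU nodes
  tolerations:
    - key: "gpumem"        # assumed taint key
      operator: "Equal"
      value: "32"          # assumed taint value
      effect: "NoSchedule" # assumed taint effect
</syntaxhighlight>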
If your task, on the other hand, can use a GPU with 32 GB *or more*, you have to
# make the pod tolerate the taint "gpumem-32" *and* "gpumem-40".
# make the pod require the node label "gpumem-32" *or* "gpumem-40".
Example:
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  # the standard node selector is insufficient here,
  # so the more expressive "nodeAffinity" is needed
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: gpumem
                operator: In
                values:
                  - "32"
                  - "40"
  # ... rest of the specs like before
</syntaxhighlight>
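As in the first scenario, the node affinity alone is not enough: the pod also has to tolerate both taints from step 1. A minimal sketch, assuming the taints use the key <code>gpumem</code> with values <code>32</code> and <code>40</code> and effect <code>NoSchedule</code> (the actual key, value, and effect depend on how the nodes are tainted; see the table below):

<syntaxhighlight lang="yaml">
spec:
  # tolerate both taints so the pod may land on
  # either the 32 GB or the 40 GB GPU nodes
  tolerations:
    - key: "gpumem"        # assumed taint key
      operator: "Equal"
      value: "32"
      effect: "NoSchedule" # assumed taint effect
    - key: "gpumem"
      operator: "Equal"
      value: "40"
      effect: "NoSchedule"
</syntaxhighlight>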
== List of compute nodes ==