Persistent volumes
== Persistent volumes ==
A persistent volume (PV) in Kubernetes is a cluster resource which can be requested by a container. To use one, you claim a PV using a persistent volume claim (PVC), which you apply in your namespace. The PVC can then be mounted to directories within a container. The important point is that the PVC survives the end of the container, i.e. the data in the PV is kept until the PVC is released. If the PVC is mounted again in a new container, the data will still be present. A persistent volume which is bound to a claim cannot be assigned to any other claim. '''If the PVC is released, the PV is also released and immediately and automatically wiped clean of all data.''' If you want to keep your data, copy it to some other permanent storage first.

On the cluster, there are two types of persistent volumes currently configured:
* Local persistent volumes
* Global persistent volumes

Note: the cluster will soon get large, fast global storage; at that point local persistent volumes will be phased out and probably no longer be available. Tensorboard monitoring should be done using service exports, as explained below, and should not make use of local PVs.

=== Local storage on the node ===
The path for local storage for each user is /raid/local-data/<your-username>. You can mount it as a hostPath, but you have to make sure that the directory is created if it does not exist, by specifying "type: DirectoryOrCreate".

The data will remain persistent on the host, but note that it then also only exists on this particular host. If you need to access it again, you have to make sure the pod always ends up on the same specific node; see the example below. Otherwise, write your scripts in such a way that they check for the existence of the local data and, if it is not there yet, copy it over from somewhere on the internet.
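The check-and-copy pattern described above can be sketched as a small shell script. This is a sketch only: DATA_DIR, DATA_URL and the ".complete" marker file are hypothetical names, and the default path uses /tmp so the example runs anywhere; on the cluster you would point DATA_DIR at your mounted local storage instead.

```shell
#!/usr/bin/env bash
# Re-create node-local data only when this particular host does not have it yet.
set -eu

DATA_DIR="${DATA_DIR:-/tmp/local-data/mnist}"            # on the cluster: a directory under your local mount
DATA_URL="${DATA_URL:-https://example.com/mnist.tar.gz}" # hypothetical download source

if [ -f "$DATA_DIR/.complete" ]; then
    echo "local data already present in $DATA_DIR"
else
    echo "local data missing, fetching from $DATA_URL"
    mkdir -p "$DATA_DIR"
    # the real download would go here, e.g.:
    # curl -fsSL "$DATA_URL" | tar -xz -C "$DATA_DIR"
    touch "$DATA_DIR/.complete"  # marker written last, so aborted copies are retried
fi
```

Writing the marker file as the last step means a pod that is killed mid-download will simply repeat the copy on its next run instead of working with incomplete data.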
=== Local persistent volumes ===
These are persistent volumes which are mapped to special folders of the host filesystem of the node. Each node exposes several persistent volumes which can be claimed. The user cannot control exactly which volume is bound to a claim, but can request a minimum size. A persistent volume claim for a local PV is configured as shown in the listing below; code examples can be found in the subdirectory "kubernetes/example_2" of the tutorial sample code, [[File:Kubernetes_samples.zip|Kubernetes samples]].
'''WARNING: Once a local persistent volume has been bound to a specific node, all pods which make use of this volume are forced to also run on this node. This means you have to rely on resources (e.g. GPUs) being available on exactly that particular node.'''
'''NOTE: The storage class "local-ssd" which was previously used for local persistent volumes is now obsolete, since a better driver with automatic provisioning has been installed. From now on, please use "local-path" instead, which will give you a PV on the fastest local device (usually SSD/NVMe RAID). No new volumes of class "local-ssd" can be claimed.''' Please copy over all your data from old PVCs if you have the opportunity, or delete old PVCs that are no longer in use. As soon as there are no more PVCs of the old class in use, the class will be deleted from the cluster. Also, check out "global-datasets" below, which gives you a new opportunity to store large, static datasets on a very fast device.
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: tf-mnist-pvc
spec:
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes which
      # have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class, see tutorial
  storageClassName: local-path
  # leave these unchanged, they must match the PV type, otherwise binding will fail
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
</syntaxhighlight>

When the claim is defined to your satisfaction, apply it like this:

<syntaxhighlight lang="bash">
> kubectl apply -f pvc.yaml
</syntaxhighlight>

You can check on the status of this (and every other) claim:

<syntaxhighlight lang="bash">
> kubectl get pvc
NAME           STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
tf-mnist-pvc   Pending                                                     11s
</syntaxhighlight>

Since the claim has not been used by a container yet, it is not yet bound to a persistent volume (PV).

=== Example ===
The following example creates an access pod on the compute node "tiamat", which mounts the fastest local storage as well as all personal directories in the ceph file system:

<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  name: storage-access-pod-tiamat
spec:
  # pin the pod to the node whose local storage we want to access
  nodeSelector:
    kubernetes.io/hostname: tiamat
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "1d"]
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 1
        memory: 1Gi
    volumeMounts:
    - mountPath: /abyss/home
      name: cephfs-home
      readOnly: false
    - mountPath: /abyss/shared
      name: cephfs-shared
      readOnly: false
    - mountPath: /abyss/datasets
      name: cephfs-datasets
      readOnly: true
    - mountPath: /local
      name: local-storage
      readOnly: false
  volumes:
  - name: cephfs-home
    hostPath:
      path: "/cephfs/abyss/home/<your-username>"
      type: Directory
  - name: cephfs-shared
    hostPath:
      path: "/cephfs/abyss/shared"
      type: Directory
  - name: cephfs-datasets
    hostPath:
      path: "/cephfs/abyss/datasets"
      type: Directory
  - name: local-storage
    hostPath:
      path: "/raid/local-data/<your-username>"
      type: DirectoryOrCreate
</syntaxhighlight>
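Such an access pod can then be created and entered interactively. The commands below are a sketch: the file name access-pod.yaml is a hypothetical choice, and the pod name is assumed to match the manifest's metadata; they must be run against the cluster.

```shell
# create the access pod on the node it is pinned to
kubectl apply -f access-pod.yaml

# once it is Running, open a shell inside it; the /local and /abyss/*
# mount points are then available in the container
kubectl exec -it storage-access-pod-tiamat -- /bin/bash

# delete the pod when done; hostPath data on the node is kept
kubectl delete pod storage-access-pod-tiamat
```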
 
=== Global persistent volumes ===
 
Currently, there is no global pool for persistent volumes, as this has been replaced by CephFS hostPaths.
== Reading/writing the contents of a persistent volume ==
