On the cluster, there are three types of persistent volumes currently configured:
* Local persistent volumes
* Host directories
* Global persistent volumes
Local persistent volumes should be used to import training data and to store the results and log files of your training. There are special PVs for monitoring the training with TensorBoard. Host directories are meant for common training data sets stored permanently on the host; they are always read-only. Global persistent volumes are provided cluster-wide and are accessible from any node.
=== Local persistent volumes ===
These are persistent volumes which are mapped to special directories in the host filesystem of a node. Each node exposes several persistent volumes which can be claimed. The user cannot control exactly which volume is bound to a claim, but can request a minimum size. A persistent volume claim (PVC) for a local PV is configured as shown below. Code examples can be found in the subdirectory "kubernetes/example_2" of the tutorial sample code, [[File:Kubernetes_samples.zip|Kubernetes samples]].
WARNING: Once a persistent volume has been bound to a specific node, all pods which make use of this volume are forced to run on this node as well. This means you have to rely on the required resources (e.g. GPUs) being available on exactly that particular node.
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: tf-mnist-local-pvc
spec:
  # the access mode; a PVC must request at least one
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes which
      # have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class, see tutorial.
  storageClassName: local-ssd
</syntaxhighlight>
As long as the claim has not been used by a container, it is not yet bound to a persistent volume (PV).
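To use the claim, reference it by name in the pod specification. A minimal sketch (the pod name, image, and mount path below are placeholders; <code>claimName</code> must match the name given in the PVC's metadata):
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: Pod
metadata:
  # example pod name, adjust as needed
  name: tf-mnist-pod
spec:
  containers:
    - name: trainer
      # placeholder image
      image: tensorflow/tensorflow
      volumeMounts:
        # mount the claimed volume into the container
        - mountPath: /data
          name: training-data
  volumes:
    # refer to the PVC by the name given in its metadata
    - name: training-data
      persistentVolumeClaim:
        claimName: tf-mnist-pvc
</syntaxhighlight>
Once a pod using the claim is scheduled, the claim is bound to a concrete volume on the node where the pod runs.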
=== Host directories ===
Large training data sets which are required by many different users are stored permanently in the filesystem of several nodes. They are claimed with a PVC in the same way.

=== Global persistent volumes ===
In contrast, global persistent volumes are provided cluster-wide and are accessible from any node (they are managed internally with rook-ceph). They reside on SSDs and thus should be reasonably fast; however, depending on where the volume ends up, data will probably be transferred across the network to/from the filesystem of another node. They are therefore slower than local-ssd, but leave you considerably more flexible, as they do not require pods to run on specific nodes. Compared to creating local persistent volumes, the only thing which needs to be changed is the storage class to "ceph-ssd":
<syntaxhighlight lang="yaml">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # the name of the PVC, we refer to this in the container configuration
  name: tf-mnist-global-pvc
spec:
  # the access mode; a PVC must request at least one
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # storage resource request. This PVC can only be bound to volumes which
      # have at least 8 GiB of storage available.
      storage: 8Gi
  # the requested storage class, see tutorial.
  storageClassName: ceph-ssd
</syntaxhighlight>
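The claim can then be created and inspected with kubectl; a typical workflow (assuming the manifest above was saved as <code>pvc.yaml</code>):
<syntaxhighlight lang="bash">
# create the claim on the cluster
kubectl apply -f pvc.yaml
# check its status; it stays "Pending" until a container uses it and a volume is bound
kubectl get pvc tf-mnist-global-pvc
# remove the claim again when it is no longer needed
kubectl delete pvc tf-mnist-global-pvc
</syntaxhighlight>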