The CephFS file system
As explained in the [[CCU:GPU Cluster Quick Start|quick start tutorial]], every user can mount certain local host paths inside their pods, which are backed by a global distributed Ceph file system. As a reminder, the primary home directory is
* <syntaxhighlight lang="bash">/cephfs/abyss/homes/<your-username></syntaxhighlight>
This file system is usually quite fast, but only for the workloads it is designed for. It is distributed storage: the filesystem metadata is kept in databases on dedicated servers, while the actual contents of the files live on others. Metadata access (such as reading file attributes, or determining which server holds a specific file) can therefore become a bottleneck.

For a small file, reading the metadata is orders of magnitude more expensive than reading the contents themselves, so performance degrades dramatically when writing or accessing many small files. In particular, keeping many small files in a single directory (say, more than 10,000) makes even simple filesystem operations such as directory listings take very long, and automated backup jobs may run into problems.
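One common way to work around the metadata cost is to bundle many small files into a single archive before placing them on CephFS, so that reads touch one file's metadata instead of thousands. The sketch below illustrates this with <syntaxhighlight lang="bash" inline>tar</syntaxhighlight>; the directory name <syntaxhighlight lang="bash" inline>my_dataset</syntaxhighlight> is purely illustrative, not a cluster default.

```shell
# Create a few sample small files (stand-in for a real dataset).
mkdir -p my_dataset
for i in 1 2 3; do
    echo "sample $i" > "my_dataset/file_$i.txt"
done

# Bundle them into one archive: on CephFS this is a single metadata
# entry instead of one per file.
tar -cf my_dataset.tar my_dataset

# List the members without unpacking the archive.
tar -tf my_dataset.tar
```

Many training frameworks can read such archives directly (e.g. tar-based or HDF5-based dataset formats), which avoids ever unpacking the small files onto CephFS.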
If avoiding many small files is not possible for you, use the local SSD storage on a single node instead, which is orders of magnitude faster for small files; the trade-off is that you are bound to that particular node (or have to duplicate the data across several local filesystems). See below for details on local filesystems.
 
== CephFS capacity and backup strategy ==
