</syntaxhighlight>
You now have a copy of your dataset on Lolth. Every hour, the datasets are synced to the directory "/raid/datasets/your.username" on an NFS server. You can check whether your data has arrived yet by creating a test container with the following configuration: <syntaxhighlight lang="yaml"></syntaxhighlight> Once the data has been copied, you can mount this directory into any container running on the cluster, since it is exported via NFS. Note that every user has read access to the whole directory tree, so you can also use this method to share data between users. As a side effect, you now have two backups of your data on two different machines (although both are in the same rack, so not really fire-proof).

You can delete data from Lolth by ssh'ing into the machine and using rm on files in the "datasets/cluster" subdirectory. During the hourly sync, data not present there will also be deleted from the global cluster storage.
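The hourly sync behaves like a mirror: anything you delete on Lolth is also removed from the global cluster storage on the next sync. A minimal local sketch of these semantics, assuming an rsync-style mirror (the cluster's actual sync tool and paths are not specified here, so all names below are hypothetical): <syntaxhighlight lang="bash">
# Mirror-style sync demo. The cluster's real sync tool is assumed to be
# rsync-like; the directory names "src" and "dst" are hypothetical.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p src dst
echo "kept"  > src/keep.txt
echo "stale" > dst/stale.txt   # exists only on the destination

# --delete makes the destination an exact mirror of the source,
# analogous to how the hourly sync removes data deleted from Lolth.
rsync -a --delete src/ dst/

ls dst   # keep.txt remains; stale.txt is gone
</syntaxhighlight> In other words, treat the "datasets/cluster" tree on Lolth as the single source of truth: add files there to publish them, remove files there to retire them everywhere.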
== Accessing the global storage from within a container ==