High Performance Computing

chadwick File Storage

When you log on to chadwick, you will start in your home directory. The absolute path to this directory is of the form /home/myusername (e.g. /home/caddison). In addition, you have two other top-level directories from which you can work: /volatile/myusername (e.g. /volatile/caddison) and /scratch/myusername (e.g. /scratch/caddison).
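As a quick orientation check, a small shell sketch like the one below lists which of the three working areas exist for your account. The paths follow the /…/myusername pattern described above; the function name is just for illustration.

```shell
# Sketch: print whichever of the given directories actually exist.
# On chadwick you would pass your home, volatile and scratch paths.
existing_areas() {
    for d in "$@"; do
        # -d tests that the path exists and is a directory
        [ -d "$d" ] && echo "$d"
    done
    return 0
}

existing_areas "/home/$USER" "/volatile/$USER" "/scratch/$USER"
```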

Your home directory has a quota on its size, which makes it feasible to back up all of the user home directories. It is not intended as the target directory for most jobs on the cluster; rather, it is intended to hold source files, small data files and those important job files that are difficult to reproduce. Space in your home directory is limited (normally to 6 GB per user), and if you run out of quota you may find it difficult to get anything done.

The command quota -s will give you information on your current quota and how much of it you have used, e.g.

$ quota -s
Disk quotas for user smithic (uid 41269): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
                   878M   5372M   5860M            5535    145k    150k 

The first figure is the amount of space used (here 878 MB) and the second is your quota (here 5372 MB).
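If you find yourself close to quota, a sketch along the following lines shows where the space is going: it summarises the size of each item in a directory, smallest first. It assumes GNU coreutils (du, sort); the function name is hypothetical.

```shell
# Sketch: report how much space each item directly under a directory
# uses, sorted so the largest appear last.
usage_by_item() {
    # du -s: one total per item; -h: human-readable sizes;
    # sort -h understands the human-readable suffixes (K, M, G)
    du -sh "$1"/* 2>/dev/null | sort -h
}
```

On chadwick you might run it as `usage_by_item "$HOME"` to find the largest files and directories to tidy up.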

Your /volatile directory should normally be the one from which jobs are submitted and to which output is written. This directory has no user quotas, but it does have a physical limit. Users are kindly asked to manage space on this filesystem sensibly and not to allow old and irrelevant output files and the like to fill it up. The volatile space is not backed up, hence the name. Important files should be moved from your volatile area to somewhere more secure as part of your normal workflow management.
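One way to keep a volatile area tidy is to periodically list files that have not been touched in a long time and review them before deleting or archiving. The sketch below does that with standard find; the function name and the 90-day threshold are just illustrative choices, and on chadwick you would pass your own /volatile/myusername path.

```shell
# Sketch: list regular files under a directory that were last modified
# more than a given number of days ago (default 90), for manual review.
list_stale_files() {
    dir="$1"
    days="${2:-90}"
    # -type f: regular files only; -mtime +N: modified more than N days ago
    find "$dir" -type f -mtime +"$days" -print
}
```

The output is a plain list of paths, so it can be reviewed by eye, or piped into further commands once you are sure nothing important is listed.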

Your /scratch directory is very similar to your volatile directory. It is part of a filesystem that supports parallel input/output (I/O), which is particularly useful for applications such as Fluent and for application domains that make use of standard I/O libraries such as NetCDF or HDF5, both of which support parallel I/O. In addition, your /scratch directory may be a good target for job output files if those files are large (say several hundred megabytes or larger).
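If you want to confirm which filesystem a given directory actually lives on (for example, to check that your scratch area is on the parallel filesystem rather than an ordinary local disk), GNU df can report the filesystem type. This is a hedged sketch; the function name is made up, and the type string printed depends on how the cluster's filesystems are configured.

```shell
# Sketch: print the filesystem type backing a given path,
# e.g. ext4, xfs, or the name of a parallel filesystem.
fs_type() {
    # df -T adds a "Type" column; the second line is the data row,
    # and its second field is the filesystem type
    df -T "$1" | awk 'NR==2 {print $2}'
}
```

For instance, `fs_type /scratch/$USER` would show the type of the filesystem holding your scratch area.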