HPC/Directories

From CNM Wiki
Revision as of 21:34, October 26, 2017 by Stern (talk | contribs) (→‎Overview)

Overview

Here is a summary of key directories related to Carbon, and environment variables used to access them:

  • $HOME or ~ (tilde): typical value /home/joe; shared across nodes: yes; purge schedule: 6 weeks after your last active proposal expires; your main data.
  • $SANDBOX: typical value /sandbox/joe; shared across nodes: yes; purge schedule: 4 weeks; scratch space (short-term storage), not backed up.
  • $TMPDIR: typical value /tmp/12345.mds01....; shared across nodes: no; purge schedule: at end of job; job-specific scratch space.
  • $PBS_O_WORKDIR: shared across nodes: (yes); the directory qsub was run in; typically used with cd $PBS_O_WORKDIR as the first line in a job script.
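To see what these variables resolve to in your own session, you can simply echo them. The fallback messages below are purely illustrative: $SANDBOX is set only by Carbon's login environment, and $TMPDIR and $PBS_O_WORKDIR only inside a running job.

```shell
# Print the key Carbon directory variables (values shown are examples).
echo "Home:    ${HOME}"
echo "Scratch: ${SANDBOX:-(unset outside Carbon)}"
echo "Job tmp: ${TMPDIR:-(unset outside a job)}"
```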

Details by function

Home directory

$HOME
~  (tilde character)

Your home directory can be referred to in standard Unix fashion, as shown above, by either the environment variable or the tilde sign in most shells (but generally not in application programs, especially those written in Fortran).

  • Files are backed up nightly.
  • Your total file volume in $HOME is subject to a soft quota, generally 0.5 TB.
  • You may exceed the soft limit by about 10% during a grace period of one week. You will see an over-quota notice upon login.
    If your usage remains above the soft limit beyond the grace period, the file system will appear (to you) to be full. To recover, delete files.
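To see how close you are to the limits, standard tools suffice; the following is a sketch, and if /home is served by Lustre, a command such as `lfs quota -u $USER /home` would report the quota directly (an assumption, not stated above):

```shell
# Rough self-check of home-directory usage against the soft quota.
du -sh "$HOME"                    # total volume under your home directory
find "$HOME" -type f | wc -l      # number of files
```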

Keep in mind CNM's Data Retention Policy, which specifies that all your files may be deleted from our servers 30 days after your last active proposal has expired. At that time, your access to Carbon and its SSH gateway will be revoked.

Global scratch space

$SANDBOX

This environment variable points to a user-specific directory, shared across nodes like the home-directory.

Use this directory for short-lived files that need to be shared among multiple nodes and that may be large, numerous, or frequently changing. To accommodate this, usage policies are stricter than for /home:

  • Files are not backed up.
  • Hard quotas are 3 TB in volume and 2 million files.
  • Soft quotas are 10 GB and 10,000 files.
  • The grace period for exceeding a soft limit is 3 weeks.
  • Files will be deleted automatically once they are older than 4 weeks.

These limits (subject to change) are aimed at keeping space available for the intended use by files of unusual size (F.O.U.S.) or, conversely, small files of unusual count.

Local scratch space

$TMPDIR

This variable and the directory it refers to are provided by the queueing system for all processes that execute a job. The directory:

  • resides on local disk on each node,
  • is named the same on each node,
  • is not shared across nodes,
  • is shared for processes on the same node (as many as given in "ppn=…"), in other words, the name is PBS job-specific, but not Unix PID-specific,
  • typically provides about 100 GB of space,
  • will be wiped upon job exit on each node.

The environment variable TMPDIR is not shared across nodes either. Either communicate it internally within your program, or have it exported by mpirun/mpiexec:

OpenMPI
mpirun … \
       -x TMPDIR \
       [-x OTHERVAR] \
       …
Intel MPI
mpiexec.hydra … \
       -genvlist TMPDIR[,OTHERVAR] \
       …

Use in job files

You can use $TMPDIR in one or more of the following ways:

  • direct your application to store its temporary files there, which is typically done by command line switches or an environment variable such as:
export FOO_SCRATCH=$TMPDIR
  • actually run your application there:
cd $TMPDIR
In this case, make sure you either copy your input files there or specify full paths to $HOME or $PBS_O_WORKDIR.
  • copy files back upon job termination:
#PBS -W stageout=$TMPDIR/foo.ext@localhost:$PBS_O_WORKDIR
#PBS -W stageout=$TMPDIR/*.bar@localhost:$PBS_O_WORKDIR
You may specify several of these lines and use wildcards to specify source files on the compute nodes. In contrast to explicit trailing "cp" commands in the job script, these copies will be executed even if a job overruns its walltime. See the qsub manual for further information.
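Putting the pieces above together, a minimal job script might look like the following sketch. The resource request, file names, and the application command ./myprog are placeholders, not Carbon-specific values:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=1:00:00
# Copy the result back even if the job overruns its walltime
# (output.dat is a placeholder name):
#PBS -W stageout=$TMPDIR/output.dat@localhost:$PBS_O_WORKDIR

# Run in node-local scratch; stage the input in first.
cd "$TMPDIR"
cp "$PBS_O_WORKDIR"/input.dat .
./myprog input.dat > output.dat
```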



(*) Lustre is a parallel file system that allows concurrent and coherent file access at high data rates.