HPC/Directories

Overview

Here is a summary of key directories related to Carbon, and environment variables used to access them:

Environment variable    Typical value           Shared across nodes   Notes
$HOME or ~ (tilde)      /home/joe               yes                   home sweet home
$SANDBOX                /sandbox/joe            yes                   extra storage, not backed up
$TMPDIR                 /tmp/12345.mds01....    no                    job-specific scratch
$PBS_O_WORKDIR                                  (yes)                 the directory qsub was run in; typically used with "cd $PBS_O_WORKDIR" as the first line in a job script

Details by function

Home directory

$HOME
~

The users' home directories are kept on a Lustre* file system and are backed up nightly. The home directory can be reached in standard Unix fashion using either the $HOME environment variable or the tilde sign (~), which is expanded by most shells (but generally not by application programs, especially those written in Fortran).

Your total file volume in $HOME is subject to a limit called quota (http://en.wikipedia.org/wiki/Disk_quota), set at 0.5 TB for most users. This is a soft limit which can be exceeded by about 10% for a grace period of one week.
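Since $HOME is on Lustre, you can typically check your current usage against the quota with the lfs quota command. A minimal sketch, assuming /home is the Lustre mount point:

lfs quota -u $USER /home    # report disk usage and quota limits for your user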

Sandbox - global scratch and overflow

$SANDBOX

For files that need to be shared among the nodes, and are possibly large and change often, use a "sandbox" directory. The $SANDBOX environment variable points to a user-specific directory which is shared across nodes via Lustre*, but not backed up.

To protect against accidental overflows, your use of $SANDBOX is subject to quota as well, generally 3 TB soft and 4 TB hard. The soft limit can be exceeded up to the hard limit for a grace period of one week.
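A typical usage pattern is sketched below; the subdirectory name is an arbitrary example, and /sandbox as the mount point is an assumption:

mkdir -p $SANDBOX/myproject            # 'myproject' is a placeholder name
cp big_input.dat $SANDBOX/myproject/   # large shared files; remember these are not backed up
lfs quota -u $USER /sandbox            # check usage against the soft/hard limits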

Local scratch space

$TMPDIR

This variable, and the directory it refers to, are provided by the queueing system for all processes that execute a job. The directory:

  • resides on local disk on each node,
  • is named the same on each node,
  • is not shared across nodes,
  • is shared by processes on the same node (as many as given by "ppn=…"); in other words, the name is PBS job-specific, but not Unix PID-specific,
  • typically provides about 100 GB of space,
  • will be wiped upon job exit on each node.
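To see these properties in action, a job script can report the per-node scratch location and the space available there. An illustrative sketch; actual names and sizes will vary:

echo "TMPDIR on $(hostname): $TMPDIR"   # job-specific name, e.g. /tmp/12345.mds01...
df -h "$TMPDIR"                         # local disk space remaining on this node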

The environment variable TMPDIR is not shared across nodes either. Either communicate its value internally within your program, or have it exported to all processes by mpirun/mpiexec:

OpenMPI
mpirun … \
       -x TMPDIR \
       [-x OTHERVAR] \
       …
Intel MPI
mpiexec.hydra … \
       -genvlist TMPDIR[,OTHERVAR] \
       …

Use in job files

You can use $TMPDIR in one or more of the following ways:

  • direct your application to store its temporary files there, which is typically done by command line switches or an environment variable such as:
export FOO_SCRATCH=$TMPDIR
  • actually run your application there:
cd $TMPDIR
In this case, make sure you either copy your input files there or specify full paths under $HOME or $PBS_O_WORKDIR.
  • copy files back upon job termination:
#PBS -W stageout=$TMPDIR/foo.ext@localhost:$PBS_O_WORKDIR
#PBS -W stageout=$TMPDIR/*.bar@localhost:$PBS_O_WORKDIR
You may specify several of these lines and use wildcards to specify source files on the compute nodes. In contrast to explicit trailing "cp" commands in the job script, this copying will be executed even if a job overruns its walltime. See the qsub manual for further information.
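Putting these pieces together, a minimal single-node job script might look as follows; the resource requests, file names, and application name are placeholder assumptions:

#!/bin/bash
#PBS -l nodes=1:ppn=8,walltime=1:00:00
#PBS -W stageout=$TMPDIR/results.out@localhost:$PBS_O_WORKDIR

cd $PBS_O_WORKDIR                # start in the directory qsub was run from
cp input.dat $TMPDIR/            # stage input to node-local scratch
cd $TMPDIR                       # run in fast local scratch
$PBS_O_WORKDIR/myapp input.dat > results.out   # 'myapp' is a placeholder application
# results.out is copied back by the stageout directive above,
# even if the job overruns its walltime.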



(*) Lustre is a parallel file system that allows concurrent and coherent file access at high data rates.