HPC/Directories
Overview
Here is a summary of the key directories related to Carbon and the environment variables used to access them:
| Environment variable | Typical value | Shared across nodes | Notes |
|---|---|---|---|
| $HOME or ~ (tilde) | /home/joe | yes | home sweet home |
| $SANDBOX | /sandbox/joe | yes | extra storage, not backed up |
| $TMPDIR | /tmp/12345.mds01.... | no | job-specific scratch |
| $PBS_O_WORKDIR | | (yes) | the directory qsub was run in; typically used with cd $PBS_O_WORKDIR as the first line in a job script |
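As a quick orientation, the lines below print what each variable points to. Note that $TMPDIR and $PBS_O_WORKDIR are set by the queueing system only while a job is running, so run this from within a job script or an interactive job; whether $SANDBOX is pre-set on login nodes depends on the site setup.

  # Print the key Carbon directories for the current session/job:
  echo "Home directory:     $HOME"
  echo "Sandbox:            $SANDBOX"
  echo "Node-local scratch: $TMPDIR"          # set only inside a job
  echo "Submission dir:     $PBS_O_WORKDIR"   # set only inside a job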
Details by function
Home directory
$HOME ~
Users' home directories are kept on a Lustre* file system and are backed up nightly. The home directory can be reached in standard Unix fashion using either the environment variable or the tilde; the tilde is expanded by most shells, but generally not by application programs, especially those written in Fortran.
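A minimal shell illustration of this caveat (the file name input.dat is made up): the shell expands both forms, but a bare ~ handed to a program, e.g. inside a Fortran OPEN statement, stays literal.

  # Both forms are expanded by the shell and name the same file:
  ls ~/input.dat
  ls $HOME/input.dat
  # A literal "~/input.dat" inside an application's own input is NOT
  # expanded; pass the full path (e.g. built from $HOME) instead.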
Sandbox - global scratch and overflow
$SANDBOX
For files that need to be shared among the nodes and that are possibly large or change often, use a "sandbox" directory. The environment variable points to a user-specific directory that is shared via Lustre* but not backed up.
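A short sketch of typical sandbox use, with a hypothetical project directory and data file; because the directory is shared across nodes, any node of a subsequent job can read it:

  # Stage a large, frequently changing working set into the sandbox
  # (shared across nodes, but NOT backed up):
  mkdir -p $SANDBOX/myproject
  cp large_dataset.tar $SANDBOX/myproject/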
Local scratch space
$TMPDIR
This variable and the directory it refers to are provided by the queueing system for all processes that execute a job. The directory:
- resides on local disk on each node,
- is named the same on each node,
- is not shared across nodes,
- is shared by all processes of the job on the same node (as many as requested with "ppn=…"); in other words, the name is PBS job-specific, not Unix PID-specific,
- typically provides about 100 GB of space,
- will be wiped upon job exit on each node.
The environment variable TMPDIR is not shared across nodes either. Either communicate its value internally within your program, or have it exported by mpirun/mpiexec:
- OpenMPI:
   mpirun … \
     -x TMPDIR \
     [-x OTHERVAR] \
     …
- Intel MPI:
   mpiexec.hydra … \
     -genvlist TMPDIR[,OTHERVAR] \
     …
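As a quick sanity check under OpenMPI (only the -x TMPDIR flag comes from above; the rank count and the echo command are arbitrary), each rank should report a node-local path of the same name:

  # Every rank prints its host name and the TMPDIR value it received:
  mpirun -x TMPDIR -np 4 sh -c 'echo "$(hostname): $TMPDIR"'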
Use in job files
You can use $TMPDIR in one or more of the following ways:
- direct your application to store its temporary files there, which is typically done by command line switches or an environment variable such as:
export FOO_SCRATCH=$TMPDIR
- actually run your application there:
cd $TMPDIR
- In this case, make sure you either copy your input files there or specify full paths using $HOME or $PBS_O_WORKDIR.
- copy files back upon job termination:
  #PBS -W stageout=$TMPDIR/foo.ext@localhost:$PBS_O_WORKDIR
  #PBS -W stageout=$TMPDIR/*.bar@localhost:$PBS_O_WORKDIR
- You may specify several of these lines and use wildcards to specify source files on the compute nodes. In contrast to explicit trailing "cp" commands in the job script, this copy will be executed even if a job overruns its walltime. See the qsub manual for further information. A complete job script sketch combining these uses follows below.
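Putting the pieces together, here is a minimal job script sketch; the resource requests, the application name myprog, and the file names input.dat and output.dat are hypothetical placeholders, while the stageout directive and the variables follow the usage shown above.

  #!/bin/bash
  #PBS -l nodes=1:ppn=8
  #PBS -l walltime=1:00:00
  # Copy the result back even if the job overruns its walltime:
  #PBS -W stageout=$TMPDIR/output.dat@localhost:$PBS_O_WORKDIR

  # Start from the directory qsub was run in:
  cd $PBS_O_WORKDIR

  # Point the application's scratch files at the node-local disk:
  export FOO_SCRATCH=$TMPDIR

  # Run in the node-local scratch directory, giving full paths to input:
  cd $TMPDIR
  myprog $PBS_O_WORKDIR/input.dat > output.dat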
(*) Lustre is a parallel file system that allows concurrent and coherent file access at high data rates.