HPC/Directories
Revision as of 19:37, November 9, 2009
Overview
Here is a summary of the key directories on Carbon and the environment variables used to access them:
| Environment variable | Typical value | Shared across nodes | Notes |
|---|---|---|---|
| `$HOME` or `~` (tilde) | /home/joe | yes | home sweet home |
| `$SANDBOX` | /sandbox/joe | yes | extra storage, *not backed up* |
| `$TMPDIR` | /tmp/pbs_mom/12345.mds01.... | no | job-specific scratch |
| `$PBS_O_WORKDIR` | | (yes) | the directory qsub was run in; typically used with `cd $PBS_O_WORKDIR` as the first line in a job script |
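As noted for `$PBS_O_WORKDIR`, a job script typically begins by returning to the submission directory. A minimal sketch of such a job script — the resource values are placeholders, and the fallback to `$PWD` is only there so the script also runs outside a PBS job:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=8        # placeholder resource request
#PBS -l walltime=1:00:00     # placeholder walltime

# PBS starts the job in $HOME; return to the directory qsub was run from.
# (The fallback to $PWD only matters when testing outside a PBS job.)
cd "${PBS_O_WORKDIR:-$PWD}" || exit 1
echo "running in: $PWD"
```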
Details by function
Home directory
`$HOME`, `~`
Users' home directories are kept on a Lustre* file system and are backed up nightly. The home directory can be reached in the standard Unix fashion using either the environment variable or the tilde, which is expanded by most shells but generally not by application programs, especially those written in Fortran.
Sandbox - global scratch and overflow
$SANDBOX
For files that need to be shared among the nodes, and that are possibly large and change often, use a "sandbox" directory. The environment variable points to a *user-specific* directory which is shared across nodes via Lustre*, but *not backed up*.
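For example, a large input data set can be staged into the sandbox once and then read by all nodes of a job. A sketch — the directory and file names are hypothetical, and the `mktemp` fallback only stands in for `$SANDBOX` when run outside Carbon:

```shell
#!/bin/bash
# Stage shared job input in the user's sandbox: every node can read it,
# but remember the sandbox is NOT backed up.
# "shared_input" and "input.dat" are hypothetical names; the mktemp
# fallback substitutes for $SANDBOX when run outside Carbon.
sandbox="${SANDBOX:-$(mktemp -d)}"
mkdir -p "$sandbox/shared_input"
echo "demo data" > "$sandbox/shared_input/input.dat"
ls -l "$sandbox/shared_input/input.dat"
```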
Local scratch space
$TMPDIR
- resides on local disk on each node
- typically provides about 100 GB of space
- is the same for all (e.g. MPI) processes on the SAME node, however many are given in `ppn=…` (i.e., it is PBS job-specific, not Unix PID-specific)
- is not shared across nodes
- will be wiped upon job exit
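Because `$TMPDIR` is wiped when the job exits, any results produced there must be copied back to a shared directory before the job ends. A sketch of that pattern — the file name is hypothetical, and the `$PWD`/`mktemp` fallbacks only apply when run outside a PBS job:

```shell
#!/bin/bash
# Compute in node-local scratch, then save results before $TMPDIR is wiped.
# The fallbacks ($PWD, mktemp) substitute for PBS variables when run
# outside a PBS job, e.g. for testing.
work="${PBS_O_WORKDIR:-$PWD}"
scratch="${TMPDIR:-$(mktemp -d)}"

cd "$scratch" || exit 1
echo "result" > result.dat           # stands in for the real computation
cp result.dat "$work/"               # copy back before job exit wipes $TMPDIR
```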
(*) Lustre is a parallel file system that allows concurrent and coherent file access at high data rates.