HPC/Submitting and Managing Jobs
Directories and Environment
First read: directory configuration.
Applications
We use the environment-modules package to manage user applications.
This is similar to the setup at other centers such as NERSC or PNNL.
The basic CNM-specific user environment is configured automatically in /etc/profile.d/cnm.{sh,csh}.
For now the only applications are the Development tools.
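To see what is available and to configure your shell for a particular application, use the standard module commands; the module name below is only a placeholder, pick a real one from the output of module avail:
module avail                # list all available modules
module list                 # show modules currently loaded in your shell
module load modulename      # load a module (placeholder name)
module unload modulename    # unload it again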
Submitting jobs to Moab/Torque
qsub [-A accountname] [options] jobfile
For details on options:
man qsub
qsub --help
More details are at the Torque Manual, in particular the qsub man page.
The single main queue is batch and need not be specified. All job routing decisions are handled by the scheduler. In particular, short jobs are accommodated by a daily reserved node and by backfill scheduling, i.e. they are "waved forward" while a big job waits for its full resources to become available.
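As a concrete illustration, a short job could be submitted directly from the command line as follows; the account name, resource values, and job file name are placeholders to adjust:
qsub -A cnm123 -l nodes=1:ppn=8,walltime=0:30:00 jobfile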
Debug queue
For testing job processing and your job environment, use qsub -q debug or #PBS -q debug.
The queue accepts jobs under the following conditions:
nodes <= 2
ppn <= 4
walltime <= 1:00:00
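For example, a job script header that stays within these limits might look like this (values are illustrative):
#PBS -q debug
#PBS -l nodes=1:ppn=4
#PBS -l walltime=0:30:00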
Checking job status
Use the command qstat (from PBS) or showq (from Moab):
- qstat [-u $USER] or showq [-u $USER]
- regular output
- qstat -a or showq -n
- alternate format (showing job names)
Getting extra information
- qstat -n [-1] jobnum
- Show the nodes where a job runs.
- qstat -f [jobnum] [-1]
- Full information such as submit arguments and run directories. The "-1" option disables wrapping for long output lines.
- checkjob [-v] jobnum
- Get extended job status information, useful to diagnose problems with "stuck" jobs.
Removing jobs
qdel jobnumber
Example job file
- sample job file for Infiniband interconnect (recommended):
#!/bin/bash
## Basics: Number of nodes, processors per node (ppn), and walltime (hhh:mm:ss)
#PBS -l nodes=5:ppn=8
#PBS -l walltime=0:10:00
#PBS -N job_name
#PBS -A account
## File names for stdout and stderr. If not set here, the defaults
## are <JOBNAME>.o<JOBNUM> and <JOBNAME>.e<JOBNUM>
#PBS -o job.out
#PBS -e job.err
## Send mail at begin, end, abort, or never (b, e, a, n)
#PBS -m ea

# Change into the directory where qsub was executed
cd $PBS_O_WORKDIR

# Count allocated cores
NPROCS=`wc -l < $PBS_NODEFILE`

# Start MPI job over the default interconnect
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
    programname
- If your program reads from files or takes options and/or arguments, use and adjust one of the following forms:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
    programname < run.in
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
    programname -options arguments < run.in
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
    programname < run.in > run.out 2> run.err
- In this form, anything after programname is optional. If you use specific redirections for stdout or stderr as shown (>, 2>), the job-global files job.out and job.err declared earlier will remain empty or contain only output from your shell startup files (which should really be silent) and from the rest of your job script.
- InfiniBand (OpenIB) is the default (and fast) interconnect mechanism for MPI jobs. This is configured through the environment variable $OMPI_MCA_btl.
- To select Ethernet transport (e.g. for embarrassingly parallel jobs), specify an -mca option:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
    -mca btl self,tcp \
    programname
The account parameter
The parameter for the option -A account is in most cases the CNM proposal, specified as follows:
cnm123
- (3 digits) for proposals below 1000
cnm01234
- (5 digits, 0-padded) for proposals from 1000 onwards.
user
- (the actual string "user", not your user name) for a limited personal startup allocation
staff
- for discretionary access by staff.
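For example, a job charged to a (hypothetical) proposal 1234 would specify either
#PBS -A cnm01234
in the job file, or equivalently on the command line:
qsub -A cnm01234 jobfile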
You can check your account balance in hours as follows:
mybalance -h
gbalance -u $USER -h
PPN Tricks
PBS will generate a $PBS_NODEFILE containing the name of each allocated node exactly ppn times, e.g. for a job with
#PBS -l nodes=2:ppn=4
PBS will produce a nodefile like this:
n011
n011
n011
n011
n037
n037
n037
n037
Currently, each Carbon node has 8 cores, and for many jobs users request ppn=8.
However, you may need to request fewer cores, e.g. for the following reasons:
- your application is not parallelized
- your application has limited hardcoded parallelization, e.g. for 2 or 4 cores only (Carbon nodes have 8 cores)
- your application runs multi-threaded but uses $PBS_NODEFILE to infer the number of processes to start
- your application runs busy service processes or service threads (e.g. NWChem)
- your application saturates a resource, e.g. memory bandwidth (some large VASP calculations)
- the node's memory is not sufficient to run processes on all cores.
Select a remedy from the following scenarios.
- General node sharing
- When a job requires only a few cores and a commensurate fraction of other resources, simply specify ppn as needed:
#PBS -l nodes=nnn:ppn=4
- In this case, the remaining cores may be allocated to other jobs, which is the default policy:
#PBS -l naccesspolicy=SHARED
- User-specific node sharing
- To permit node sharing among your own jobs, specify a SINGLEUSER policy:
#PBS -l nodes=nnn:ppn=2
#PBS -l naccesspolicy=SINGLEUSER
- Exclusive node access
- When your job requires only a few cores but a disproportionate fraction of another resource on a node (such as most of its memory or a lot of I/O bandwidth), claim the entire node:
#PBS -l nodes=nnn:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
- PBS will reserve the entire node(s), but place each node name only ppn times in the $PBS_NODEFILE.
Different PPN by node
- When your first MPI process (the "master" process) requires more memory than your other "worker" processes, give several nodes specifications, separated by a "+" character (which is unusual and born of historical necessity):
#PBS -l nodes=1:ppn=1+2:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
- For clarity, the nodes specification in this example reads as follows:
nodes = ( 1:ppn=1 ) + ( 2:ppn=4 )
- This will request 3 nodes exclusively, but the first node will occur only once in the $PBS_NODEFILE, e.g.:
n011
n012
n012
n012
n012
n034
n034
n034
n034
In all of the preceding scenarios the following applies:
- The $PBS_NODEFILE seen by the job script will always match ppn.
- For accounting, the job will be billed by the number of cores blocked from use by other users, i.e., ncores=ppn for shared nodes, and ncores=8 otherwise.
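As a worked example with hypothetical numbers: a request of nodes=2:ppn=2 is billed ncores=2 per node (4 cores in total) under the default SHARED policy, but ncores=8 per node (16 cores in total) under SINGLEJOB, because both 8-core nodes are then blocked entirely.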
Multithreading (OpenMP)
When you wish to use multithreading, you must ensure that the total number of "busy" user threads and processes corresponds to the number of cores requested from PBS. Today, multithreading in applications and libraries is typically programmed using the OpenMP interface, and the number of threads is controlled by the environment variable $OMP_NUM_THREADS.
Select from the following scenarios.
- OpenMP, single entire node
#PBS -l nodes=1:ppn=8
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8
...
- OpenMP, single node shared
#PBS -l nodes=1:ppn=4
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=`uniq -c $PBS_NODEFILE | awk '{print $1; exit}'`
...
- Here, the default policy "SHARED" is in effect, and OMP_NUM_THREADS is set automatically by counting the number of times that the first node occurs in $PBS_NODEFILE. This will allow you to vary or override the nodes setting using "qsub -l nodes=…" without having to edit it twice in the job file.
- OpenMP/MPI hybrid
- Making efficient use of multithreading on multiple nodes which communicate over MPI is fairly involved and is subject to ongoing research. PBS requests are the same as in the preceding case (OpenMP only), but you must export OMP_NUM_THREADS to all MPI satellite nodes. Further, since $PBS_NODEFILE is typically used to identify remote nodes, you must thin out the file before using it as machinefile:
#!/bin/bash
#PBS -l nodes=nnn:ppn=8
...
MACHINEFILE=$PBS_NODEFILE
...
if [ multithreaded ]    # insert specific condition
then
    MACHINEFILE=machinefile
    sort -u $PBS_NODEFILE > $MACHINEFILE
fi
...
mpirun -x OMP_NUM_THREADS=`uniq -c $PBS_NODEFILE | awk '{print $1; exit}'` \
    -machinefile $MACHINEFILE \
    -np `wc -l < $MACHINEFILE` \
    …
The -x option is specific to Open MPI; please consult the documentation to achieve the same behavior in other MPI implementations.
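As a sketch only (not the Carbon default shown above): with an MPICH-style launcher the equivalent would typically use mpiexec's -env option; consult that implementation's documentation before relying on this:
mpiexec -f $MACHINEFILE -np `wc -l < $MACHINEFILE` \
    -env OMP_NUM_THREADS `uniq -c $PBS_NODEFILE | awk '{print $1; exit}'` \
    programname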
Interactive node access
- You can use ssh to interactively access any compute node on which a job of yours is running. As soon as a node no longer runs at least one of your jobs, your ssh session to that node will be terminated.
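For example, to find and log in to a node used by one of your running jobs (the job number and node name are placeholders):
qstat -n 12345      # list the nodes assigned to job 12345
ssh n011            # log in to one of the listed nodes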