HPC/Submitting and Managing Jobs

Directories and Environment

First read: directory configuration.

Applications

We use the environment-modules package to manage user applications. This is similar to places like NERSC or PNNL. The basic CNM-specific user environment is configured automatically in /etc/profile.d/cnm.{sh,csh}.

For now, the only applications provided are the Development tools.
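
The usual environment-modules commands apply; a brief sketch (modulename is a placeholder, not an actual Carbon module name):

 module avail              # list the modules available
 module load modulename    # add a module to your environment (placeholder name)
 module list               # show the modules currently loaded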


Submitting jobs to Moab/Torque

 qsub [-A accountname] [options] jobfile

For details on options:

 man qsub
 qsub --help

More details are at the Torque Manual, in particular the qsub man page.

The single main queue is batch and need not be specified. All job routing decisions are handled by the scheduler. In particular, short jobs are accommodated by a daily reserved node and by backfill scheduling, i.e. they are "waved forward" while a big job waits for its full resources to become available.

Debug queue

For testing job processing and your job environment, use qsub -q debug or #PBS -q debug. The queue accepts jobs under the following conditions:

nodes <= 2
ppn <= 4
walltime <= 1:00:00
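
For example, a small test job within these limits could be submitted as follows (job.sh stands for your own job file):

 qsub -q debug -l nodes=1:ppn=4 -l walltime=0:30:00 job.sh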

Querying jobs

Use the command qstat (from PBS) or showq (from Moab):

qstat [-u $USER]
showq [-u $USER]
    regular output
qstat -a
showq -n
    alternate format (showing names)
qstat -f [jobnum]
    full information
checkjob [-v] jobnum
    extended job status information – useful to diagnose problems with "stuck" jobs.

Removing jobs

 qdel jobnumber

Example job file

  • sample job file for Infiniband interconnect (recommended):
#!/bin/bash

##  Basics: Number of nodes, processors per node (ppn), and walltime (hhh:mm:ss)
#PBS -l nodes=5:ppn=8
#PBS -l walltime=0:10:00
#PBS -N job_name
#PBS -A account

## File names for stdout and stderr.  If not set here, the defaults
## are <JOBNAME>.o<JOBNUM> and <JOBNAME>.e<JOBNUM>
#PBS -o job.out
#PBS -e job.err

## send mail at begin, end, abort, or never (b, e, a, n)
#PBS -m ea

# change into the directory where qsub will be executed
cd $PBS_O_WORKDIR

# count allocated cores
NPROCS=`wc -l < $PBS_NODEFILE`

# start MPI job over default interconnect
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
        programname
  • If your program reads from files or takes options and/or arguments, use and adjust one of the following forms:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
       programname  < run.in
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
       programname  -options arguments < run.in
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
       programname < run.in > run.out 2> run.err
In these forms, anything after programname is optional. If you use specific redirections for stdout or stderr as shown (>, 2>), the job-global files job.out and job.err declared earlier will remain empty, or will contain only output from your shell startup files (which should really be silent) and from the rest of your job script.
  • Infiniband (OpenIB) is the default (and fast) interconnect mechanism for MPI jobs. This is configured through the environment variable $OMPI_MCA_btl.
  • To select ethernet transport (e.g. for embarrassingly parallel jobs), specify an -mca option:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
	-mca btl self,tcp \
        programname

The account parameter

The parameter for option -A account is in most cases the CNM proposal number, specified as follows:

cnm123
    (3 digits) for proposals below 1000
cnm01234
    (5 digits, 0-padded) for proposals from 1000 onwards.
user
    (the actual string "user", not your user name) for a limited personal startup allocation
staff
    for discretionary access by staff.
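
For illustration, this is how the account string would be passed to qsub (the proposal numbers and the job file name myjob.sh are made up):

 qsub -A cnm123   myjob.sh    # proposal 123
 qsub -A cnm01234 myjob.sh    # proposal 1234
 qsub -A user     myjob.sh    # personal startup allocation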

You can check your account balance in hours as follows:

mybalance -h
gbalance -u $USER -h

PPN Tricks

PBS will generate a $PBS_NODEFILE containing the name of each allocated node exactly ppn times, e.g. for a job with

#PBS -l nodes=2:ppn=4

PBS will produce a nodefile like this:

n011
n011
n011
n011
n037
n037
n037
n037
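
Your job script can derive its counts from this file with standard shell tools; a minimal sketch for the nodes=2:ppn=4 example above:

NPROCS=`wc -l < $PBS_NODEFILE`        # total allocated cores (8 here)
NNODES=`uniq $PBS_NODEFILE | wc -l`   # distinct nodes (2 here)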

Currently, each Carbon node has 8 cores, and most jobs will request ppn=8. However, you may need to request fewer cores, e.g. for the following reasons:

  • your application is not parallelized
  • your application has limited hardcoded parallelization, e.g. for 2 or 4 cores only (Carbon nodes have 8 cores)
  • your application runs multi-threaded but uses $PBS_NODEFILE to infer the number of processes to start
  • your application runs busy service processes or service threads (e.g. NWChem)
  • your application saturates a resource, e.g. memory bandwidth (some large VASP calculations)
  • the node's memory is not sufficient to run processes on all cores.

The following sections show various scenarios.

Shared vs. Exclusive Node Access

Shared node access
When a job requires only a few cores and few other resources, simply specify ppn as needed:
#PBS -l nodes=nnn:ppn=4
In this case, the remaining cores may be allocated to other jobs, which is the default policy:
#PBS -l naccesspolicy=SHARED
User-specific node access
To share a node only among your own jobs, specify a SINGLEUSER policy:
#PBS -l nodes=nnn:ppn=2
#PBS -l naccesspolicy=SINGLEUSER
Exclusive node access
When your job requires only a few cores but most or all of some other resource on a node (such as most of its memory or a lot of I/O bandwidth), claim the entire node:
#PBS -l nodes=nnn:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
PBS will reserve the entire node(s), but place each node name only ppn times in the $PBS_NODEFILE.
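For instance, a serial but memory-hungry program could be run on an otherwise idle node with a sketch like this (programname stands for your executable):
#PBS -l nodes=1:ppn=1
#PBS -l naccesspolicy=SINGLEJOB
...
cd $PBS_O_WORKDIR
programname > run.out 2> run.err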

Different PPN by node

When your first MPI process (the "master" process) requires more memory than your other "worker" processes, give several node specifications, separated by the "+" character:
#PBS -l nodes=1:ppn=1+2:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
This will request 3 nodes exclusively, but the first node will occur only once in the $PBS_NODEFILE, e.g.
n011
n012
n012
n012
n012
n034
n034
n034
n034
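
The job script does not need special handling for this layout; a minimal sketch, reusing the core count and mpirun form from the example job file above:

NPROCS=`wc -l < $PBS_NODEFILE`    # 9 in this example: 1 master + 2 x 4 workers
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
        programname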

In all of the preceding scenarios the following applies:

  • The $PBS_NODEFILE seen by the job script will always match ppn.
  • For accounting, the job will be billed by the number of cores blocked from use by other users, i.e., the actual ppn for shared nodes, and ppn=8 for naccesspolicy=SINGLEJOB.

Multithreading (OpenMP)

When you wish to use multithreading, you must ensure that the total number of "busy" user threads and processes corresponds to the number of cores requested from PBS. Today, multithreading in applications and libraries is typically programmed using the OpenMP interface and the number of threads is controlled by the environment variable $OMP_NUM_THREADS.

OpenMP, single entire node
#PBS -l nodes=1:ppn=8
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8
...
OpenMP, single node shared
#PBS -l nodes=1:ppn=4
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=`uniq -c $PBS_NODEFILE | awk '{print $1; exit}'`
...
Here, the default policy "SHARED" is in effect, and OMP_NUM_THREADS is set automatically by counting the number of times that the first node occurs in $PBS_NODEFILE. This will allow you to vary or override the nodes setting using "qsub -l nodes=…" without having to edit it twice in the job file.
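For example, the same job file (here called job.sh) could then be resubmitted with a different core count without editing it, since qsub command-line options take precedence over the #PBS directives in the script:
 qsub -l nodes=1:ppn=2 job.sh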
OpenMP + MPI Hybrid
This is still a very new field and subject to ongoing research. Usage is the same as the preceding case (OpenMP only), plus you must export OMP_NUM_THREADS to all MPI satellite nodes:
...
mpiexec -x OMP_NUM_THREADS …    # OpenMPI-specific
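
A minimal sketch of one such hybrid layout, assuming 8-core nodes and choosing 4 OpenMP threads per MPI rank (the thread count, the rank layout, and the helper file name ranks.nodefile are illustrative only, not a recommendation):

#PBS -l nodes=2:ppn=8
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=4

# 16 allocated cores / 4 threads per rank = 4 MPI ranks, 2 per node
NPROCS=`wc -l < $PBS_NODEFILE`
NRANKS=$((NPROCS / OMP_NUM_THREADS))

# list each node twice, i.e. 2 MPI ranks per node, 4 threads each
uniq $PBS_NODEFILE | awk '{print; print}' > ranks.nodefile

mpiexec -x OMP_NUM_THREADS -machinefile ranks.nodefile -np $NRANKS \
        programname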


Policies

  • Direct user access to nodes is only possible while a job is running for that user. This is governed by the torque-pam package.