HPC/Submitting and Managing Jobs

Environment configuration

Home dir

The users' home directories are hosted on Lustre and are backed up nightly. The home directory can be reached in standard Unix fashion using either of the following symbols:

~
$HOME

Lustre sandbox

For files that need to be shared among the nodes and that are potentially large or change often, use a "sandbox" directory. The environment variable

$SANDBOX

points to such a user-specific directory, which is shared via Lustre but not backed up. Lustre is a parallel file system that allows concurrent and coherent file access at high data rates.
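
For example, a job can stage its large working files in a run-specific subdirectory of the sandbox (the directory name below is only a placeholder):

 mkdir -p $SANDBOX/run001
 cd $SANDBOX/run001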

Applications

In the final configuration, we will use the environment-modules package to manage user applications, similar to sites such as NERSC or PNNL. In early access mode, the CNM-specific user environment is configured automatically in /etc/profile.d/cnm.{sh,csh}.
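
Once environment modules are in place, finding and loading an application will typically look like the following (the module name here is only a placeholder):

 module avail                 # list available application modules
 module load some-application # hypothetical module name
 module list                  # show currently loaded modules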

For now, the only applications available are the Development tools.

Admin note: The master copy of these files resides in mgmt{01,02}:/opt/teamhpc/node-skel/etc/profile.d and is distributed by ~root/bin/skeldistrib.

Submitting jobs to Moab/Torque

 qsub [-A accountname] [options] jobfile

For details on options:

 man qsub
 qsub --help     # (sorry, not much)

We currently have only the default queue configured.

More details are available at the Torque Wiki, in particular the full qsub documentation for all supported PBS options.
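
A typical submission, assuming a job script named job.pbs in the current directory (the file name and account are placeholders), looks like:

 qsub -A cnm123 job.pbs

qsub prints the number of the new job, which can then be used with qstat, checkjob, and qdel.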

Querying jobs

Use the command qstat (from PBS) or showq (from Moab):

 qstat [-u $USER]
 showq [-u $USER]
        regular output
 qstat -a
 showq -n
        alternate format (showing names)
 qstat -f [jobnum]
        full information
 checkjob [-v] jobnum
        extended job status information – useful to diagnose problems with "stuck" jobs.

Removing jobs

 qdel jobnumber

Example job file

  • sample job file for InfiniBand interconnect (recommended):
#!/bin/bash

##  Basics: Number of nodes, processors per node (ppn), and walltime (hhh:mm:ss)
#PBS -l nodes=5:ppn=8
#PBS -l walltime=0:10:00
#PBS -N job_name
#PBS -A account

## File names for stdout and stderr.  If not set here, the defaults
## are <JOBNAME>.o<JOBNUM> and <JOBNAME>.e<JOBNUM>
#PBS -o job.out
#PBS -e job.err

## send mail at begin, end, abort, or never (b, e, a, n)
#PBS -m ea

# change into the directory from which qsub was invoked
cd $PBS_O_WORKDIR

# count allocated cores
NPROCS=`wc -l < $PBS_NODEFILE`


# start MPI job over fast interconnect (InfiniBand)
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
        programname  args

OpenIB is the default (and fast) interconnect mechanism; the transport is selected via the environment variable $OMPI_MCA_btl.
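
As a sketch, the site default could be expressed as follows (the exact component list depends on the Open MPI version and site configuration):

 export OMPI_MCA_btl=self,sm,openib   # shared memory within a node, InfiniBand between nodes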

  • To select Ethernet transport (e.g. for embarrassingly parallel jobs), specify an -mca option:
mpirun -machinefile $PBS_NODEFILE -np $NPROCS \
	-mca btl self,tcp \
        programname

The account parameter

The parameter for option -A account is in most cases the CNM proposal number, specified as follows:

 cnm123
        (3 digits) for proposals below 1000
 cnm01234
        (5 digits, 0-padded) for proposals from 1000 onwards
 user
        for a limited personal startup allocation
 staff
        for discretionary access by staff

You can check your account balance in hours as follows:

mybalance -h
gbalance -u $USER -h

Using OpenMP

For hybrid MPI/OpenMP operation under PBS (which is what happens, for instance, when linking MKL with OpenMP threading), two adjustments are necessary:

  1. The environment variable OMP_NUM_THREADS needs to be set to the number of available cores per node, i.e., the ppn parameter. By default, this variable is set to 1 to select single-threading of OpenMP-compiled applications.
  2. The machinefile needs to be thinned out in the job file so that each node is listed only once.

Example

#!/bin/bash
#PBS -l nodes=nnn:ppn=8
...
MACHINEFILE=$PBS_NODEFILE
...
if [ multithreaded ]            # placeholder (always true); insert a specific condition here
then
    sort -u $MACHINEFILE > machinefile
    MACHINEFILE=machinefile
    export OMP_NUM_THREADS=8
fi
...
NPROC=`wc -l < $MACHINEFILE`
...
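
The subsequent mpirun call then uses the thinned machine file and the reduced process count; a sketch, continuing the placeholders from above:

mpirun -machinefile $MACHINEFILE -np $NPROC \
        programname args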

Hybrid MPI+OpenMP is currently unsupported and may well be less efficient than compiling and running with MPI-only communication.

Policies

  • Direct user access to compute nodes is possible only while a job belonging to that user is running on them. This is enforced by the torque-pam package.