HPC/Submitting and Managing Jobs/Advanced node selection

Node Types

Hardware

Carbon has two major node types, called gen1 and gen2, and gen2 is further differentiated by the amount of memory.


Node names      | Gen.  | Extra props | Node count | Cores/node (max ppn) | Cores total | Charge rate | CPU model        | CPUs/node | Clock (GHz) | Mem/node (GB) | Mem/core (GB) | GPU model   | GPUs/node | VRAM/GPU (GB) | Disk/node (GB) | Year added | Note
Login nodes:
login5…6        | gen7a | gpus=2      | 2          | 16                   | 32          | 3.0         | Xeon Silver 4125 | 2         | 2.50        | 192           | 12            | Tesla V100  | 2         | 32            | 250            | 2019       |
Compute nodes:
n421…460        | gen5  |             | 40         | 16                   | 640         | 2.0         | Xeon E5-2650 v4  | 2         | 2.10        | 128           | 8             |             |           |               | 250            | 2017       |
n461…476        | gen6  |             | 16         | 16                   | 256         | 2.0         | Xeon Silver 4110 | 2         | 2.10        | 96            | 6             |             |           |               | 1000           | 2018       |
n477…512        | gen6  |             | 36         | 16                   | 576         | 2.0         | Xeon Silver 4110 | 2         | 2.10        | 192           | 12            |             |           |               | 1000           | 2018       |
n513…534        | gen7  | gpus=2      | 22         | 32                   | 704         | 3.0         | Xeon Gold 6226R  | 2         | 2.90        | 192           | 6             | Tesla V100S | 2         | 32            | 250            | 2020       |
n541…580        | gen8  |             | 20         | 64                   | 2560        | 2.1         | Xeon Gold 6430   | 2         | 2.10        | 1024          | 16            |             |           |               | 420            | 2024       |
Total           |       |             | 134        |                      | 4736        |             |                  |           |             |               |               |             | 48        |               |                |            |


Benchmarks show that gen2 nodes are about twice as fast as gen1 nodes for memory-intensive applications. (The X5300 series is hampered by a memory-bandwidth bottleneck when all 8 cores are active and frequently access memory.) gen1 nodes are therefore charged at a discounted rate for the walltime actually used, which also encourages their continued productive use.

Selecting node types for jobs

Jobs are directed automatically onto either gen1 or gen2 nodes, with preference for gen2 if both are available. Unless specifically requested, jobs will never mix generations. This avoids mixing disparate CPU speeds and MPI communication characteristics within a single job. You can force a job onto either node set by suffixing the nodes= specifier with a property such as :gen1 or :gen2, either on a #PBS line in the job script or on the qsub command line. For example, to run on 2 nodes with 8 cores each:

qsub -l nodes=2:ppn=8:gen1  foo.job	# not recommended for VASP
qsub -l nodes=2:ppn=8:gen2  foo.job

The following are (as of now) equivalent, since "bigmem" currently implies "gen2":

qsub -l nodes=2:ppn=8:gen2:bigmem  foo.job
qsub -l nodes=2:ppn=8:bigmem       foo.job
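
To check which properties a given node advertises, or to list all nodes carrying a given property, the Torque pbsnodes utility can be queried. This is a minimal sketch; the node name n421 is only an example, and the output format may vary between Torque versions:

# Show the properties line of one node (example node name)
pbsnodes n421 | grep properties

# List all nodes that carry the gen2 property
pbsnodes :gen2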

See also: http://www.clusterresources.com/torquedocs21/2.1jobsubmission.shtml#resources

PPN Tricks

Each Carbon node has 8 cores, and for many jobs users request entire nodes by specifying ppn=8 at job submission. However, you may need to request fewer cores, e.g. for the following reasons:

  • your application is not parallelized,
  • your application has limited hardcoded parallelization, e.g. for 2 or 4 cores only,
  • your application runs multi-threaded but uses $PBS_NODEFILE to infer the number of processes to start,
  • your application runs busy service processes or service threads (e.g. NWChem),
  • your application saturates a resource, e.g. memory bandwidth (some large VASP calculations),
  • the node's memory is exhausted by fewer application processes than there are cores available.

Depending on the reason, the node may either be shared with other jobs or must be kept free of them. In the past, the only way to achieve exclusive but undersubscribed node access was to request ppn=8 and then thin out a copy of the nodefile before passing it to the application. To eliminate the need to edit the nodefile, use the -l naccesspolicy=… flag, which distinguishes the resources requested from Moab from those passed to the application (in $PBS_NODEFILE).
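
For reference, the legacy thin-out approach looked roughly like the following. This is a minimal sketch only, with programname as a placeholder and 4 processes per node chosen arbitrarily:

# Legacy approach: request the nodes in full, then keep only 4 entries per node
#PBS -l nodes=2:ppn=8

cd $PBS_O_WORKDIR
# Copy the nodefile, keeping at most 4 occurrences of each node name
awk 'count[$1]++ < 4' $PBS_NODEFILE > mynodes
mpirun -machinefile mynodes -np $(wc -l < mynodes) programname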

Select an option from the following scenarios.

Shared vs. Exclusive Node Access

Permit other users and jobs
When a job requires only a few cores and a commensurate fraction of other resources, simply specify ppn as needed:
#PBS -l nodes=nnn:ppn=4
In this case, the remaining cores may be allocated to other jobs, which is the default policy:
#PBS -l naccesspolicy=SHARED
Permit only your own jobs
#PBS -l nodes=nnn:ppn=2
#PBS -l naccesspolicy=SINGLEUSER
Permit only one job per node, no sharing
When your job requires only a few cores but a disproportionate fraction of another resource on a node (such as most of its memory or a lot of I/O bandwidth), claim the entire node:
#PBS -l nodes=nnn:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
PBS will reserve the entire node(s), but place each node name only ppn times in the $PBS_NODEFILE. This is also useful for MPI+OpenMP ("hybrid") programming; see below.
Permit only one of your jobs, and permit other users' jobs
#PBS -l nodes=nnn:ppn=4
#PBS -l naccesspolicy=UNIQUEUSER
The node is shared, but limited to one job for any given user.
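
Whichever policy you choose, you can confirm from inside a running job how many slots per node were actually passed to the application; a minimal sketch:

# Show each allocated node and how many times it appears in the nodefile
sort $PBS_NODEFILE | uniq -c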

Different PPN by node

When your first MPI process (the "master" process) requires more memory than your other "worker" processes, give several node specifications, separated by a "+" character (a syntax which is unusual and born of historical necessity):
#PBS -l nodes=1:ppn=1+2:ppn=4
#PBS -l naccesspolicy=SINGLEJOB
For clarity, the nodes specification in this example reads as follows:
nodes = ( 1:ppn=1 ) + ( 2:ppn=4 )
This will request 3 nodes exclusively, but the first node will occur only once in the $PBS_NODEFILE, e.g.:
n011
n012
n012
n012
n012
n034
n034
n034
n034

In all of the preceding scenarios the following applies:

  • The $PBS_NODEFILE seen by the job script will always match ppn.
  • For accounting, the job will be billed by the number of cores blocked from use by other users, i.e., ncores=ppn for shared nodes, and ncores=8 otherwise.

Multithreading (OpenMP)

When you wish to use multithreading, you must ensure that the total number of "busy" user threads and processes corresponds to the number of cores requested from PBS. Today, multithreading in applications and libraries is typically programmed using the OpenMP interface, and the number of threads is controlled by the environment variable $OMP_NUM_THREADS. Select from the following scenarios.

Pure OpenMP, single entire node

#PBS -l nodes=1:ppn=8
#PBS -l naccesspolicy=SINGLEJOB

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=$PBS_NUM_PPN
...
programname …

Pure OpenMP, single node, possibly shared

Choose the number of cores n such that 1 ≤ n ≤ 8:

#PBS -l nodes=1:ppn=n
...
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=$PBS_NUM_PPN
...
programname …

Here, the default policy "SHARED" is in effect, and OMP_NUM_THREADS is set automatically from $PBS_NUM_PPN, which reflects the ppn value of the nodes request. This allows you to vary or override the nodes setting using "qsub -l nodes=…" without having to edit the job file in two places.
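
If $PBS_NUM_PPN is not available (it is provided as of Torque 3.x; see the variable list below), the same value can be obtained by counting how often the first node appears in the nodefile; a minimal sketch:

# Fallback: count the occurrences of the first node listed in $PBS_NODEFILE
export OMP_NUM_THREADS=$(grep -c "^$(head -n1 $PBS_NODEFILE)$" $PBS_NODEFILE)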

OpenMP/MPI hybrid

Making efficient use of multithreading across multiple nodes that communicate over MPI is fairly involved and is subject to ongoing research. Since OMP_NUM_THREADS is set to 1 by default on MPI satellite nodes, you must export this variable after altering it in the job file.

#!/bin/bash
#PBS -l nodes=nnn:ppn=2
#PBS -l naccesspolicy=SINGLEJOB

# Calculate the number of threads available per MPI process
cores_per_node=$( grep -c ^processor /proc/cpuinfo )
# Export the variable so that "mpirun -x" finds it in the environment
export OMP_NUM_THREADS=$(( cores_per_node / PBS_NUM_PPN ))

mpirun -x OMP_NUM_THREADS -machinefile $PBS_NODEFILE -np $PBS_NP \
     programname …

The -x option is specific to Open MPI; consult the documentation of other MPI implementations to achieve the same behavior. An MPI-agnostic workaround is sketched further below.

The last example will ensure:

  • you are allocated entire nodes (SINGLEJOB policy)
  • you do not oversubscribe cores (OMP_NUM_THREADS is calculated from ppn)
  • you have only one place to adjust (ppn), and can do so on the command line, or even after submission

It is assumed:

  • The number of cores on the first node (running the job script) is the same as on the other nodes.
  • All cores on a node will be used.
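
If your MPI launcher has no equivalent of Open MPI's -x option, an MPI-agnostic workaround is to have the job script write a tiny wrapper that fixes OMP_NUM_THREADS on every rank. This is a minimal sketch, assuming the working directory is on a filesystem shared by all nodes and with programname as a placeholder:

#!/bin/bash
#PBS -l nodes=nnn:ppn=2
#PBS -l naccesspolicy=SINGLEJOB

cd $PBS_O_WORKDIR
cores_per_node=$( grep -c ^processor /proc/cpuinfo )
threads=$(( cores_per_node / PBS_NUM_PPN ))

# Generate a wrapper with the computed thread count baked in
cat > run_with_threads.sh <<EOF
#!/bin/bash
export OMP_NUM_THREADS=$threads
exec programname "\$@"
EOF
chmod +x run_with_threads.sh

mpirun -machinefile $PBS_NODEFILE -np $PBS_NP ./run_with_threads.sh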

Advanced: PBS_* Variables

As of Torque 3.x, the following environment variables are provided in the job environment, with their values computed at job start. From torque-3.0.5/src/resmom/start_exec.c:

static char *variables_else[] =   /* variables to add, value computed */
  {
  "HOME",
  "LOGNAME",
  "PBS_JOBNAME",
  "PBS_JOBID",
  "PBS_QUEUE",
  "SHELL",
  "USER",
  "PBS_JOBCOOKIE",
  "PBS_NODENUM",
  "PBS_TASKNUM",
  "PBS_MOMPORT",
  "PBS_NODEFILE",
  "PBS_NNODES",      /* number of nodes specified by size */
  "TMPDIR",
  "PBS_VERSION",
  "PBS_NUM_NODES",  /* number of nodes specified by nodes string */
  "PBS_NUM_PPN",    /* ppn value specified by nodes string */
  "PBS_GPUFILE",    /* file containing which GPUs to access */
  "PBS_NP",         /* number of processors requested */
  "PBS_WALLTIME",   /* requested or default walltime */
  NULL
  };
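
To see which of these variables are actually defined in your own job, a quick check can be added to any job script; a minimal sketch:

# Print all PBS-provided variables of the running job, sorted by name
env | grep '^PBS_' | sort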