HPC/Applications/lammps/Package OMP
== Package OMP ==
LAMMPS modules since 2012 are compiled with [http://lammps.sandia.gov/doc/Section_accelerate.html#acc_2 <code>yes-user-omp</code>], permitting multi-threaded runs of selected pair styles, and in particular MPI/OpenMP hybrid parallel runs. To set up such runs, see [[HPC/Submitting and Managing Jobs/Advanced node selection]].
Be careful how you [[HPC/Submitting and Managing Jobs/Advanced node selection | allocate CPU cores on compute nodes]]:
* The number of MPI tasks running on a node is determined by options to mpirun.
* The number of threads that each MPI task runs with is usually determined by the environment variable <code>OMP_NUM_THREADS</code>, which is 1 by default on Carbon.
* The LAMMPS <code>package omp</code> command has a mandatory argument <code>''Nthreads''</code> that either ''overrides'' <code>OMP_NUM_THREADS</code> or, when set to <code>*</code>, ''uses'' it; the latter is the recommended practice. It is sensible to specify a concrete thread count only once, namely in the job file, and let LAMMPS inherit it by setting <code>''Nthreads''</code> to <code>*</code>, as sketched below.
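For example, a single-node job script can set the thread count once and let LAMMPS inherit it (a minimal sketch; the <code>ppn</code> value, thread count, and input file name <code>in.melt</code> are placeholders, not recommendations):
  #!/bin/bash
  #PBS -l nodes=1:ppn=8
  cd $PBS_O_WORKDIR
  # specify the concrete thread count once, here in the job file
  export OMP_NUM_THREADS=8
  lmp_openmpi -suffix omp -in in.melt
With <code>package omp * force/neigh</code> in the input file, LAMMPS then picks up this value instead of a hard-coded thread count.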


=== Usage ===
# Use the command <code>package omp ''Nthreads'' ''mode''</code> near the beginning of your LAMMPS control script.
# Do one of the following (both alternatives are sketched after this list):
#:* Use the [http://lammps.sandia.gov/doc/suffix.html '''suffix omp''' command].
#:* On the command line, use the [http://lammps.sandia.gov/doc/Section_start.html#start_7 '''-suffix omp''' switch].
# In the job file or qsub command line, [[HPC/Submitting and Managing Jobs/Advanced node selection#Multithreading (OpenMP) |reserve nodes and ppn]] suitable for OpenMP runs.
# Call the <code>lmp_openmpi</code> (regular) binary.
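The two alternatives of step 2 look like this (a minimal sketch; <code>in.melt</code> is a placeholder input file name):
  # alternative 1: the suffix command in the input script
  package omp * force/neigh
  suffix omp
or:
  # alternative 2: the switch on the command line
  lmp_openmpi -suffix omp -in in.melt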


=== Input file example ===
Examples:
  # recommended - inherit OMP_NUM_THREADS
  package omp * force/neigh
  # '''not''' recommended - hard-codes the thread count
  package omp 4 force
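For context, a complete minimal input deck could look as follows (an illustrative sketch based on the standard Lennard-Jones melt example; all numeric values are assumptions, not recommendations):
  # minimal LJ melt deck using the OMP package
  package      omp * force/neigh
  suffix       omp
  units        lj
  atom_style   atomic
  lattice      fcc 0.8442
  region       box block 0 10 0 10 0 10
  create_box   1 box
  create_atoms 1 box
  mass         1 1.0
  velocity     all create 3.0 87287
  pair_style   lj/cut 2.5
  pair_coeff   1 1 1.0 1.0 2.5
  fix          1 all nve
  run          100
The <code>suffix omp</code> line is redundant when the binary is invoked with the <code>-suffix omp</code> switch (see the job file example below).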


=== Job file example ===
* Single node - choose from:
** [[HPC/Submitting and Managing Jobs/Advanced node selection#Pure OpenMP, single entire node | pure OpenMP, single entire node]], or
** [[HPC/Submitting and Managing Jobs/Advanced node selection#Pure OpenMP, single node, possibly shared | pure OpenMP, single node, possibly shared]].
: And for the main command:
  lmp_openmpi '''-suffix omp''' -in ''infile''
* Multiple nodes - use:
** [[HPC/Submitting and Managing Jobs/Advanced node selection#OpenMP/MPI hybrid]].
: And for the main command:
  mpirun … lmp_openmpi '''-suffix omp''' -in ''infile''
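Putting the pieces together for the hybrid case, a job script might look like this sketch (the node geometry, task layout, thread count, and input file name <code>in.melt</code> are assumptions; take the authoritative <code>#PBS</code> directives from the node selection page):
  #!/bin/bash
  #PBS -l nodes=2:ppn=8
  #PBS -l walltime=1:00:00
  cd $PBS_O_WORKDIR
  # for example: 2 MPI tasks per node, 4 OpenMP threads each (assumed layout)
  export OMP_NUM_THREADS=4
  mpirun -np 4 -npernode 2 lmp_openmpi -suffix omp -in in.melt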
