HPC/Applications/lammps/Package OMP


LAMMPS modules since 2012 are compiled with the USER-OMP package enabled (yes-user-omp), permitting multi-threaded runs of selected pair styles and, in particular, MPI/OpenMP hybrid parallel runs. To set up such runs, see HPC/Submitting and Managing Jobs/Advanced node selection.

Be careful how you allocate CPU cores on the compute nodes.

  • The number of MPI tasks running on a node is determined by options to mpirun.
  • The number of threads that each MPI task runs with is usually determined by the environment variable OMP_NUM_THREADS, which is 1 by default on Carbon.
  • The LAMMPS package omp command has a mandatory argument Nthreads, which either overrides OMP_NUM_THREADS or, when given as *, defers to it.
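
For example, the number of MPI tasks per node and the number of threads per task should multiply out to the cores reserved on that node. A minimal sketch, assuming OpenMPI-style mpirun options (the counts are illustrative and infile is a placeholder):

      # 2 MPI tasks per node × 4 OpenMP threads per task = 8 cores per node
      export OMP_NUM_THREADS=4
      mpirun -npernode 2 -x OMP_NUM_THREADS \
              lmp_openmpi -suffix omp -in infile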

Usage

  1. Use the command package omp Nthreads mode near the beginning of your LAMMPS control script.
  2. Do one of the following (see the sketch after this list):
  3. In the job file or qsub command line, reserve nodes and ppn suitable for OpenMP runs.
  4. Call the lmp_openmpi (regular) binary.
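
A sketch of two standard LAMMPS mechanisms for activating the threaded /omp style variants, either of which can serve as step 2; the in-script suffix command is an assumption, as this page itself only shows the command-line flag:

      # Option A: on the command line, as in the job file example below
      lmp_openmpi -suffix omp -in infile

      # Option B: in the input script, near the package omp command
      suffix omp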

Input file example

Examples:

  • Recommended:
      package omp * force/neigh
  • Not recommended:
      package omp 4 force
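
Step 1 of Usage asks for this line near the beginning of the control script; a minimal sketch of the surrounding context (all other commands are illustrative):

      units           lj
      atom_style      atomic
      # * takes the thread count from OMP_NUM_THREADS;
      # force/neigh threads both the force and the neighbor-list computation
      package omp * force/neigh
      pair_style      lj/cut 2.5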

Job file example

  • Single node: choose either "pure OpenMP, single entire node" or "pure OpenMP, single node, possibly shared", as described in HPC/Submitting and Managing Jobs/Advanced node selection.
  • Multiple nodes: use the OpenMP/MPI hybrid setup from the same page.

In either case, use for programname:

      lmp_openmpi -suffix omp -in infile
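
As a concrete illustration, a hedged sketch of a single-node, pure-OpenMP job file; the resource request and core count are illustrative, and the recommended #PBS directives are on the Advanced node selection page:

      #PBS -l nodes=1:ppn=8
      #PBS -l walltime=1:00:00
      cd $PBS_O_WORKDIR

      # one process, using all 8 reserved cores as OpenMP threads
      export OMP_NUM_THREADS=8
      lmp_openmpi -suffix omp -in infile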