HPC/Applications/lammps/Package OMP

From CNM Wiki
Revision as of 15:21, June 25, 2013 by Stern (page moved from HPC/Modules/lammps/Package OMP to HPC/Applications/lammps/Package OMP)

Package OMP

LAMMPS modules since 2012 are compiled with yes-user-omp, which permits multi-threaded runs of selected pair styles and, in particular, hybrid MPI/OpenMP parallel runs. To set up such runs, see HPC/Submitting and Managing Jobs/Advanced node selection.

  • The number of MPI tasks running on a node is determined by options to mpirun.
  • The number of threads that each MPI task runs with is usually determined by the environment variable OMP_NUM_THREADS, which is 1 by default on Carbon.
  • The LAMMPS package omp command has a mandatory argument Nthreads, which either overrides OMP_NUM_THREADS or, when set to *, inherits it; the latter is the recommended practice. It is sensible to specify a concrete thread count only once, namely in the job file, and let LAMMPS inherit that value by setting Nthreads to *.
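To illustrate how tasks and threads share a node, the following sketch derives OMP_NUM_THREADS from the cores per node and the MPI tasks per node. The core and task counts here are hypothetical values for illustration, not Carbon's actual node configuration:

```shell
# Hypothetical values for illustration; actual counts depend on the node type.
CORES_PER_NODE=8      # physical cores on one node (assumption)
TASKS_PER_NODE=2      # MPI tasks placed on each node via mpirun options

# Give each MPI task an equal share of the node's cores as OpenMP threads.
export OMP_NUM_THREADS=$((CORES_PER_NODE / TASKS_PER_NODE))
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
```

With these numbers, each node runs 2 MPI tasks of 4 threads each, filling all 8 cores without oversubscription.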

Usage

  1. Use the command package omp Nthreads mode near the beginning of your LAMMPS control script.
  2. In the job file or qsub command line, reserve nodes and ppn suitable for OpenMP runs.
  3. Do one of the following:
       • pass -suffix omp on the command line (as in the examples below), or
       • append /omp to the individual style names in your input file.
  4. Call the lmp_openmpi (regular) binary.

Input file example

Examples:

# recommended - inherit OMP_NUM_THREADS
package omp * force/neigh
# not recommended
package omp 4 force

Job file example

For the main command, running a single MPI task:
lmp_openmpi -suffix omp -in infile
or, for a parallel run under mpirun:
mpirun … lmp_openmpi -suffix omp -in infile
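Putting the pieces together, a job file for a hybrid MPI/OpenMP run might look like the following sketch. The node counts, ppn value, mpirun placement option, and input file name are assumptions for illustration, not verified Carbon settings; consult HPC/Submitting and Managing Jobs/Advanced node selection for the actual resource syntax:

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=8        # hypothetical: 2 nodes with 8 cores each
#PBS -l walltime=1:00:00

cd "$PBS_O_WORKDIR"

# 2 MPI tasks per node x 4 OpenMP threads each = 8 cores per node.
export OMP_NUM_THREADS=4
mpirun -npernode 2 lmp_openmpi -suffix omp -in in.lammps
```

The input file would then begin with "package omp * force/neigh", so LAMMPS inherits the thread count set once in the job file.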