HPC/Applications/lammps/Package OMP
Revision as of 23:31, November 5, 2012
Package OMP
LAMMPS modules since 2012 are compiled with yes-user-omp, permitting multi-threaded runs of selected pair styles, and in particular hybrid MPI/OpenMP parallel runs. To set up such runs, see HPC/Submitting and Managing Jobs/Advanced node selection.
Be careful how you allocate CPU cores on compute nodes:
- The number of MPI tasks running on a node is determined by options to mpirun.
- The number of threads each MPI task runs with is usually determined by the environment variable OMP_NUM_THREADS, which is 1 by default on Carbon.
- The LAMMPS package omp command has a mandatory argument Nthreads, which either overrides OMP_NUM_THREADS or honors it when set to *.
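A minimal sketch of how these settings interact, assuming hypothetical 8-core nodes and 2 MPI tasks per node (illustration values, not Carbon's actual hardware):

```shell
# Sketch: dividing a node's cores between MPI tasks and OpenMP threads.
# CORES_PER_NODE=8 and MPI_TASKS_PER_NODE=2 are assumed values for illustration.
CORES_PER_NODE=8
MPI_TASKS_PER_NODE=2                  # set via options to mpirun
OMP_NUM_THREADS=$((CORES_PER_NODE / MPI_TASKS_PER_NODE))
export OMP_NUM_THREADS                # each MPI task will run with this many threads
echo "$OMP_NUM_THREADS"               # prints 4
```

With OMP_NUM_THREADS exported this way, a package omp * command in the input file picks up the value automatically.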
Usage
- Use the command package omp Nthreads mode near the beginning of your LAMMPS control script.
- Do one of the following:
  - Append /omp to the style name (e.g. pair_style lj/cut/omp).
  - Use the suffix omp command in the input file.
  - On the command line, use the -suffix omp switch.
- In the job file or qsub command line, reserve nodes and ppn values suitable for OpenMP runs.
- Call the regular lmp_openmpi binary.
Input file example
- Recommended:
  package omp * force/neigh
- Not recommended:
  package omp 4 force
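As a fuller illustration, a hypothetical input-script fragment is sketched below; the pair style, cutoff, and placement details are example values, not recommendations:

```
# Hypothetical LAMMPS input fragment showing package omp placement.
package     omp * force/neigh    # near the top of the script
units       lj
atom_style  atomic
pair_style  lj/cut/omp 2.5       # /omp suffix spelled out explicitly
```

If you instead launch with the -suffix omp switch, the plain style name (pair_style lj/cut) suffices and the /omp variant is substituted automatically.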
Job file example
- Single node - choose from:
  - pure OpenMP, single entire node (see HPC/Submitting and Managing Jobs/Advanced node selection#Pure OpenMP, single entire node), or
  - pure OpenMP, single node, possibly shared (see HPC/Submitting and Managing Jobs/Advanced node selection#Pure OpenMP, single node, possibly shared).
- And for the main command:
lmp_openmpi -suffix omp -in infile
- Multiple nodes - use:
  - OpenMP/MPI hybrid (see HPC/Submitting and Managing Jobs/Advanced node selection#OpenMP/MPI hybrid).
- And for the main command:
mpirun … lmp_openmpi -suffix omp -in infile
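For the multiple-node case, the pieces above might combine into a job file like the following sketch. The node count, ppn, walltime, input file name, and mpirun placement options are assumptions for illustration; adjust them to the cluster's actual queue setup and MPI version:

```
#!/bin/bash
#PBS -l nodes=2:ppn=8            # assumed 8 cores per node; adjust for actual hardware
#PBS -l walltime=1:00:00
cd "$PBS_O_WORKDIR"

export OMP_NUM_THREADS=4         # 2 MPI tasks/node x 4 threads = 8 cores/node
# -npernode is an Open MPI placement option; other MPI implementations differ.
mpirun -np 4 -npernode 2 lmp_openmpi -suffix omp -in in.example
```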