Package GPU
- Provides GPU-accelerated versions of many pair styles and a few fixes in LAMMPS; for the full list:
- In your browser, open http://lammps.sandia.gov/doc/Section_commands.html#comm
- Search for the string /gpu.
- Supports one physical GPU per LAMMPS MPI process (CPU core).
- Multiple MPI processes (CPU cores) can share a single GPU, and in many cases it will be more efficient to run this way.
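For example (a hedged sketch; the rank count and file names are illustrative, and the input script is assumed to enable the GPU package as described under Usage below), eight MPI ranks on one node can all share that node's single GPU:
 mpirun -np 8 lmp_openmpi-gpu -in file.in > file.out
Whether oversubscribing the GPU this way is faster depends on the pair style and system size, so it is worth benchmarking both settings.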
Usage
- Use the command
package gpu mode first last split
near the beginning of your LAMMPS control script. Since all Carbon GPU nodes have just one GPU per node, the arguments first and last must always be zero; split is not restricted.
- Do one of the following:
- Append /gpu to the style name (e.g. pair_style lj/cut/gpu).
- Use the suffix gpu command.
- On the command line, use the -suffix gpu switch.
- In the job file or qsub command line, request a GPU
#PBS -l nodes=...:gpus=1
(referring to the number of GPUs per node).
- Call the lmp_openmpi-gpu binary.
Input file examples
package gpu force 0 0 1.0        # pair forces on the GPU; split 1.0 assigns all pair work to the GPU
package gpu force 0 0 0.75       # split 0.75: roughly 75% of the pair computation on the GPU, the rest on the CPU
package gpu force/neigh 0 0 1.0  # neighbor list builds on the GPU as well
package gpu force/neigh 0 1 -1.0 # GPU IDs 0 through 1 (two GPUs); a negative split balances CPU/GPU load dynamically
…
pair_style lj/charmm/coul/long/gpu 8.0 10.0
Alternatively, use the following command line options:
lmp_openmpi-gpu -sf gpu -pk gpu 1 -in file.in > file.out
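A third equivalent route is the suffix command inside the input script. Below is a minimal self-contained sketch; the melt-style system and all numeric values are illustrative assumptions, not taken from this page:
 # hypothetical minimal input deck; system and values are illustrative
 package      gpu force/neigh 0 0 -1.0   # one GPU (first = last = 0), dynamic split
 suffix       gpu                        # append /gpu to every supported style that follows
 newton       off                        # GPU pair styles require pairwise Newton off
 units        lj
 lattice      fcc 0.8442
 region       box block 0 10 0 10 0 10
 create_box   1 box
 create_atoms 1 box
 mass         1 1.0
 velocity     all create 1.44 87287
 pair_style   lj/cut 2.5                 # runs as lj/cut/gpu because of the suffix
 pair_coeff   1 1 1.0 1.0 2.5
 fix          1 all nve
 run          100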
Job file example
#PBS -l nodes=...:gpus=1
…
mpirun … lmp_openmpi-gpu -in infile
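For reference, a complete job file might look like the following sketch; the ppn count, walltime, job name, and file names are illustrative assumptions, and the input script is presumed to enable the GPU package as shown above:
 #!/bin/bash
 #PBS -l nodes=1:ppn=8:gpus=1     # one node, 8 cores, and its single GPU (counts are illustrative)
 #PBS -l walltime=2:00:00
 #PBS -N lammps-gpu
 cd $PBS_O_WORKDIR                # run from the submission directory
 mpirun lmp_openmpi-gpu -in file.in > file.out
All MPI ranks on the node then share its one GPU, per the sharing behavior noted at the top of this page.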