HPC/Applications/lammps/Package GPU
Package GPU
- Provides GPU-accelerated versions of many pair styles and a few other styles in LAMMPS; for the full list:
  - In your browser, open http://lammps.sandia.gov/doc/Section_commands.html#comm
  - Search for the string /gpu.
- Supports one physical GPU per LAMMPS MPI process (CPU core).
- Multiple MPI processes (CPU cores) can share a single GPU, and in many cases it will be more efficient to run this way (see the sketch below).
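To illustrate the sharing case, a minimal sketch: assuming an 8-core GPU node (the core count here is an assumption, not a Carbon specification), all eight MPI ranks started on the node use the same GPU, because first and last are both 0 in the package gpu command:

 # Hypothetical launch: 8 MPI ranks on one node, all sharing its single GPU
 mpirun -np 8 lmp_openmpi-gpu -in in.melt

with the control script containing, for example, package gpu force/neigh 0 0 1.0; the input file name in.melt is a placeholder.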
Usage
1. Use the command package gpu mode first last split near the beginning of your LAMMPS control script. Since all Carbon GPU nodes have just one GPU per node, the arguments first and last must always be zero; split is not restricted.
2. Do one of the following:
   - Append /gpu to the style name (e.g. pair_style lj/cut/gpu).
   - Use the suffix gpu command.
   - On the command line, use the -suffix gpu switch (see the sketch after this list).
3. In the job file or on the qsub command line, request a GPU:
   #PBS -l nodes=...:gpus=1
   (the gpus= setting refers to the number of GPUs per node).
4. Call the lmp_openmpi-gpu binary.
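A sketch of the command-line alternative from step 2, assuming the control script uses plain style names such as lj/cut and the input file is named in.melt (a hypothetical name); the -suffix gpu switch makes LAMMPS substitute the /gpu variants automatically:

 # Hypothetical invocation; in.melt and the elided mpirun options are placeholders
 mpirun … lmp_openmpi-gpu -suffix gpu -in in.melt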
Input file examples
 package gpu force 0 0 1.0
 package gpu force 0 0 0.75
 package gpu force/neigh 0 0 1.0
 package gpu force/neigh 0 1 -1.0
 …
 pair_style lj/charmm/coul/long/gpu 8.0 10.0
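As a hedged reading of these examples (per the LAMMPS package documentation), split is the fraction of particles whose computation is offloaded to the GPU, and a negative value asks LAMMPS to balance the split dynamically. An annotated fragment, with illustrative values not taken from this page:

 # Hypothetical fragment: GPU 0 handles 75% of the pair work, the CPU cores the rest
 package gpu force/neigh 0 0 0.75
 pair_style lj/cut/gpu 2.5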
Job file example
 #PBS -l nodes=...:gpus=1
 …
 mpirun … lmp_openmpi-gpu -in infile
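For orientation, a minimal complete job script sketch; the core count (ppn=8), walltime, and input file name in.melt are assumptions, and the options elided by the ellipses above are left out rather than guessed:

 #!/bin/bash
 # Hypothetical job script; adjust ppn and walltime to the actual Carbon GPU nodes
 #PBS -l nodes=1:ppn=8:gpus=1
 #PBS -l walltime=1:00:00
 cd $PBS_O_WORKDIR
 # All ranks on the node share the single GPU (first = last = 0 in the package gpu command)
 mpirun lmp_openmpi-gpu -in in.melt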