HPC/Applications/lammps/Package GPU

From CNM Wiki

== Package GPU ==
* Provides multi-threaded versions of most pair styles, all dihedral styles, and a few fixes in LAMMPS; for the full list:
*# In your browser, open http://lammps.sandia.gov/doc/Section_commands.html#comm
*# Search for the string <code>/gpu</code>.
* Supports one physical GPU per LAMMPS MPI process (CPU core).
* Multiple MPI processes (CPU cores) can share a single GPU, and in many cases it will be more efficient to run this way.
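The sharing point can be made concrete: on a one-GPU node, several MPI ranks can all drive the same device. A hypothetical launch (the rank count of 8 is illustrative, not a site requirement) would be:

 mpirun -np 8 lmp_mpi-gpu -sf gpu -pk gpu 1 -in ''infile''

Here all eight ranks share the node's single GPU (''Ngpu'' = 1 in <code>-pk gpu 1</code>).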
=== Usage ===
# Use the <code>lmp_mpi'''-gpu'''</code> binary instead of the MPI-only one.
# Use a GPU potential style. Do one of the following:
#* Append '''/gpu''' to the style name (e.g. pair_style lj/cut/gpu).
#* Use the command [http://lammps.sandia.gov/doc/suffix.html <code>suffix gpu</code>].
#* On the command line, use the [http://lammps.sandia.gov/doc/Section_start.html#command-line-options <code>-suffix gpu</code>] switch.
# Detail the GPU request. Do one of the following:
#* Near the beginning of your LAMMPS control script, use the command [http://lammps.sandia.gov/doc/package.html <code>package gpu ''Ngpu''</code>]. Since all Carbon GPU nodes have just one GPU per node, the ''Ngpu'' argument must be 1.
#* Add [http://lammps.sandia.gov/doc/Section_start.html#command-line-options <code>-package gpu 1</code>] to the command-line options. This is often preferable, because you do not need to alter your input file.
# In the job file or qsub command line, [http://www.clusterresources.com/torquedocs21/2.1jobsubmission.shtml#resources request a node ''having'' a GPU], using <code>#PBS -l nodes=...:gpus=1</code> (referring to the number of GPUs per node).
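Taken together, steps 2 and 3 done in-script (rather than via command-line switches) might begin as follows; the pair style is only an illustration:

 package gpu 1
 suffix gpu
 pair_style lj/charmm/coul/long 8.0 10.0

With <code>suffix gpu</code> in effect, the subsequent <code>pair_style</code> command is interpreted as lj/charmm/coul/long'''/gpu''', so no style names need editing.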


=== Job file example ===
 #PBS -l nodes=...''':gpus=1'''
 …
 mpirun … lmp_mpi'''-gpu''' '''-sf gpu -pk gpu 1''' -in ''infile''
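Expanded into a complete job script, this could look like the sketch below; the core count, walltime, and input-file name are illustrative assumptions, not site requirements:

 #!/bin/bash
 #PBS -l nodes=1:ppn=8:gpus=1
 #PBS -l walltime=1:00:00
 #PBS -j oe
 cd $PBS_O_WORKDIR
 mpirun -np 8 lmp_mpi-gpu -sf gpu -pk gpu 1 -in in.melt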

Latest revision as of 18:33, March 23, 2018
