HPC/Applications/lammps/Package USER-CUDA

Revision as of 23:07, November 5, 2012

Package USER-CUDA

  • Provides GPU versions of several pair styles and for long-range Coulombics via the PPPM command.
  • Only supports a single CPU (core) with each GPU [That should mean multiple nodes are possible; feasibility and efficiency to be determined --stern ]

Usage

  1. Optional: Use the command package cuda keyword value keyword value … near the beginning of your LAMMPS control script to finely control settings. This is optional since a LAMMPS binary with USER-CUDA always detects and uses a GPU by default.
  2. Do one of the following:
       ◦ Append /cuda to the style names in the input script (e.g. pair_style lj/cut/cuda), or
       ◦ use the -suffix cuda command-line switch, as shown in the job file examples below.
  3. Optional: The kspace_style pppm/cuda command has to be requested explicitly. [I am not sure if that means that other k-space styles implicitly use the GPU --stern. ]
  4. In the job file or on the qsub command line, request a GPU: #PBS -l nodes=...:gpus=1.
  5. Call the lmp_openmpi-user-cuda binary.

Input file examples

package cuda gpu/node/special 2 0 2
package cuda test 3948
…
kspace_style    pppm/cuda 1e-5
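
The package line slots into an otherwise ordinary input script. A minimal sketch of a complete input deck, assuming the standard Lennard-Jones melt problem from the LAMMPS distribution (all numeric parameters below are illustrative, not recommendations); the /cuda suffix on the pair style follows step 2 under Usage:

```
# Hypothetical minimal GPU input deck (LJ melt sketch)
package       cuda gpu/node/special 2 0 2   # use GPU devices 0 and 2 on each node

units         lj
atom_style    atomic
lattice       fcc 0.8442
region        box block 0 10 0 10 0 10
create_box    1 box
create_atoms  1 box
mass          1 1.0
velocity      all create 1.44 87287

pair_style    lj/cut/cuda 2.5               # /cuda variant of pair_style lj/cut
pair_coeff    1 1 1.0 1.0 2.5

fix           1 all nve
run           100
```

Note that kspace_style pppm/cuda would only apply to a system with charges; the neutral LJ system above omits it.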

Job file example

  • Serial job:
#PBS -l nodes=1:ppn=1:gpus=1
…
lmp_openmpi-user-cuda -suffix cuda -in infile
  • Parallel job; note that ppn must still be 1 as only one LAMMPS process (core) per node can use the sole GPU.
#PBS -l nodes=3:ppn=1:gpus=1
…
mpirun -machinefile $PBS_NODEFILE -np $PBS_NP lmp_openmpi-user-cuda -suffix cuda -in infile
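
The directives and launch line above combine into a complete submission script. A minimal sketch, assuming a Torque/PBS environment; the job name and walltime are placeholders, and the input file name infile is taken from the examples above:

```shell
#!/bin/bash
#PBS -l nodes=3:ppn=1:gpus=1     # ppn=1: each GPU serves only one LAMMPS process per node
#PBS -l walltime=1:00:00         # placeholder walltime
#PBS -N lammps-cuda              # placeholder job name

cd $PBS_O_WORKDIR                # run from the directory the job was submitted from

mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
    lmp_openmpi-user-cuda -suffix cuda -in infile
```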