HPC/Applications/namd

== Introduction ==
 
[http://www.ks.uiuc.edu/Research/namd/ NAMD] is a parallel, object-oriented molecular dynamics code designed
for high-performance simulation of large biomolecular systems.
NAMD is distributed free of charge and includes source code.
 
Use is subject to the [http://www.ks.uiuc.edu/Research/namd/license.html Univ. of Illinois Non-Exclusive, Non-Commercial Use License].
 
The NAMD project is funded by the National Institutes of Health (grant number PHS 5 P41 RR05969).
 
== NAMD on Carbon ==
=== Required reading ===
[[HPC/Getting_started]]
=== Modules ===
* As of NAMD-2.9, the namd module requires [[../fftw3|FFTW-3]]. You can load both modules together:
  module load fftw3 namd
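To verify the environment after loading, you can check that the NAMD binary is on your <code>$PATH</code> (a quick sanity check; the exact path reported will vary by installation):
  module load fftw3 namd
  module list
  which namd2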


=== Using MPI ===
* NAMD uses its own MPI wrapper, called [http://charm.cs.uiuc.edu/ Charm++], but the default <code>mpirun</code> from OpenMPI works just fine.
* Use the [[HPC/Submitting and Managing Jobs/Example Job Script#Generic job scripts|Generic job template]]; the core commands are shown here, and a complete sketch follows below.
<syntaxhighlight lang="bash">
#!/bin/bash
cd $PBS_O_WORKDIR
mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
        namd2 file.namd
</syntaxhighlight>
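For reference, a complete job script built on the generic template might look like the sketch below. The job name, node counts, and walltime are placeholders to adapt; the module loads follow the Modules section above.
<syntaxhighlight lang="bash">
#!/bin/bash
# Resource requests are placeholders -- adjust nodes, ppn, and walltime.
#PBS -N namd-run
#PBS -l nodes=2:ppn=8
#PBS -l walltime=24:00:00
#PBS -j oe

# Prerequisites, as described in the Modules section above.
module load fftw3 namd

cd $PBS_O_WORKDIR
mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
        namd2 file.namd
</syntaxhighlight>
Submit with <code>qsub</code> as usual.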
=== Using GPUs and MPI ===
As of module version namd/2.9plus-MPI-icc-3, the module provides two binaries: a regular one called <code>namd2</code> (as usual) and a GPU-enabled binary called <code>namd2-cuda</code>. The latter requires the cuda module as an additional prerequisite:
  module load fftw3
  module load cuda
  module load namd
The relevant paragraphs from the [http://www.ks.uiuc.edu/Research/namd/2.9/ug/node88.html NAMD User Guide] are:
<blockquote>
Energy evaluation is slower than calculating forces alone, and the loss is much greater in CUDA-accelerated builds. Therefore you should set outputEnergies to 100 or higher in the simulation config file. Some features are unavailable in CUDA builds, including alchemical free energy perturbation and the Lowe-Andersen thermostat.
<br>
As this is a new feature you are encouraged to test all simulations before beginning production runs. Forces evaluated on the GPU differ slightly from a CPU-only calculation, an effect more visible in reported scalar pressure values than in energies.
<br>…<br>
Each namd2 thread can use only one GPU. Therefore you will need to run at least one thread for each GPU you want to use. Multiple threads can share a single GPU, usually with an increase in performance. NAMD will automatically distribute threads equally among the GPUs on a node.
</blockquote>
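Accordingly, it is worth confirming the <code>outputEnergies</code> setting in your config before submitting a CUDA job, for example (assuming the config file is called <code>file.namd</code>, as elsewhere on this page):
  # CUDA builds: energies should be requested every 100 steps or more.
  grep -i outputEnergies file.namd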
To run on GPU nodes (a complete job script sketch follows the notes below):
  #PBS -l nodes=''N'':ppn=''PPN''''':gpus=1'''
  …
  mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
        namd2'''-cuda +idlepoll''' ''file.namd''
* Test and optimize the <code>''N''</code> and <code>''PPN''</code> parameters for your situation: start with <code>nodes=1:ppn=4</code>, then increase up to <code>ppn='''12'''</code> and/or <code>nodes &gt; 1</code>. Expect speedups from a GPU on the order of 4–5&times;.
* The <code>gpus=…</code> modifier refers to GPUs ''per node'' and currently must always be 1.
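Putting it together, a complete GPU job script might look like the sketch below (the walltime is a placeholder; choose <code>ppn</code> per the tuning advice above, with <code>gpus=1</code> as required):
<syntaxhighlight lang="bash">
#!/bin/bash
#PBS -l nodes=1:ppn=4:gpus=1
#PBS -l walltime=24:00:00
#PBS -j oe

# The CUDA binary requires the cuda module in addition to fftw3.
module load fftw3 cuda namd

cd $PBS_O_WORKDIR
mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
        namd2-cuda +idlepoll file.namd
</syntaxhighlight>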


== Documentation ==

 $ ls $NAMD_HOME/share/doc
 README.txt  announce.txt  license.txt  notes.txt  ug.pdf

== Benchmarks ==

 grep Benchm *2.7*/*.o* | sort -k4,4n | _ -w
 namd-2.7b1-gen2-1/namd-gen2-1.o248034:Info: Benchmark time: 1 CPUs 1.81607 s/step 21.0193 days/ns 241.219 MB memory
 namd-2.7b1-gen2-1/namd-gen2-1.o248034:Info: Benchmark time: 1 CPUs 1.8238 s/step 21.1088 days/ns 240.773 MB memory
 namd-2.7b1-gen2-4/namd-gen2-4.o248032:Info: Benchmark time: 4 CPUs 0.428028 s/step 4.95403 days/ns 90.7305 MB memory
 namd-2.7b1-gen2-4/namd-gen2-4.o248032:Info: Benchmark time: 4 CPUs 0.430383 s/step 4.98129 days/ns 90.25 MB memory
 namd-2.7b1-gen2-8/namd-gen2-8.o248035:Info: Benchmark time: 8 CPUs 0.568039 s/step 6.57453 days/ns 65.707 MB memory
 namd-2.7b1-gen2-8/namd-gen2-8.o248035:Info: Benchmark time: 8 CPUs 0.887236 s/step 10.2689 days/ns 65.3945 MB memory
 namd-2.7b1-gen2-8/namd-gen2-8.o248036:Info: Benchmark time: 8 CPUs 0.227086 s/step 2.62831 days/ns 64.2656 MB memory
 namd-2.7b1-gen2-8/namd-gen2-8.o248036:Info: Benchmark time: 8 CPUs 0.229058 s/step 2.65113 days/ns 64.5469 MB memory
 namd-2.7b1-gen2-16/namd-gen2-16.o248033:Info: Benchmark time: 16 CPUs 0.105355 s/step 1.21938 days/ns 50.6367 MB memory
 namd-2.7b1-gen2-16/namd-gen2-16.o248033:Info: Benchmark time: 16 CPUs 0.105658 s/step 1.2229 days/ns 50.3477 MB memory

 grep Benchm *2.9*/*.o* | sort -k4,4n | _ -w
 namd-2.9-gen2-1/namd-gen2-1.o248004:Info: Benchmark time: 1 CPUs 1.58065 s/step 18.2946 days/ns 398.93 MB memory
 namd-2.9-gen2-1/namd-gen2-1.o248004:Info: Benchmark time: 1 CPUs 1.58666 s/step 18.3641 days/ns 398.695 MB memory
 namd-2.9-gen2-1/namd-gen2-1.o248011:Info: Benchmark time: 1 CPUs 1.79167 s/step 20.737 days/ns 398.926 MB memory
 namd-2.9-gen2-1/namd-gen2-1.o248011:Info: Benchmark time: 1 CPUs 1.79819 s/step 20.8124 days/ns 398.695 MB memory
 namd-2.9-gen2-4/namd-gen2-4.o248003:Info: Benchmark time: 4 CPUs 1.05068 s/step 12.1606 days/ns 254.582 MB memory
 namd-2.9-gen2-4/namd-gen2-4.o248003:Info: Benchmark time: 4 CPUs 1.17972 s/step 13.6542 days/ns 254.582 MB memory
 namd-2.9-gen2-4/namd-gen2-4.o248006:Info: Benchmark time: 4 CPUs 0.424272 s/step 4.91055 days/ns 269.023 MB memory
 namd-2.9-gen2-4/namd-gen2-4.o248006:Info: Benchmark time: 4 CPUs 0.42632 s/step 4.93426 days/ns 269.023 MB memory
 namd-2.9-gen2-8/namd-gen2-8.o247983:Info: Benchmark time: 8 CPUs 0.211136 s/step 2.4437 days/ns 242.109 MB memory
 namd-2.9-gen2-8/namd-gen2-8.o247983:Info: Benchmark time: 8 CPUs 0.219128 s/step 2.53621 days/ns 242.109 MB memory
 namd-2.9-gen2-16/namd-gen2-1.o248005:Info: Benchmark time: 16 CPUs 0.103843 s/step 1.20188 days/ns 226.25 MB memory
 namd-2.9-gen2-16/namd-gen2-1.o248005:Info: Benchmark time: 16 CPUs 0.103924 s/step 1.20282 days/ns 225.961 MB memory
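
To read these figures: the days/ns column follows directly from s/step. Assuming the benchmark's 1 fs timestep (10<sup>6</sup> steps per ns, which reproduces the listed values), days/ns = s/step &times; 10<sup>6</sup> / 86400. For example, 1.81607 s/step gives 1.81607 &times; 10<sup>6</sup> / 86400 &asymp; 21.02 days/ns, matching the first NAMD-2.7 line above. A quick way to summarize parallel speedup relative to the single-CPU run (a sketch; <code>$4</code> is the CPU-count field and <code>$6</code> the s/step field in the lines above, and only the last sample per CPU count is kept):
 grep -h 'Benchmark time' */*.o* | \
   awk '{t[$4]=$6} END {for (n in t) printf "%3d CPUs: %.2fx\n", n, t[1]/t[n]}' | sort -n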