HPC/Applications/nwchem

From CNM Wiki
Jump to navigation Jump to search
mNo edit summary
mNo edit summary
 
(15 intermediate revisions by the same user not shown)
Line 1: Line 1:
== Introduction ==
* [http://www.nwchem-sw.org/index.php/NWChem_Documentation NWChem User Documentation]


== Considerations on Carbon ==
=== Memory ===
* [http://www.nwchem-sw.org/index.php/Top-level#MEMORY MEMORY]
The default memory allocation is far smaller than useful on Carbon. Raise it using the <code>memory</code> keyword.
The specification is per process. For example, regular gen2 nodes (not :bigmem) have 24 GB RAM; with ppn=8 and some leeway, this leads to:
 memory total 2800 mb
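
As a rough sanity check on where that figure comes from (how much leeway to leave for the OS and MPI is a judgment call, not a fixed rule):
 24 GB/node ÷ 8 processes ≈ 3072 MB per process → round down to 2800 MB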
=== Disk space ===
* [http://www.nwchem-sw.org/index.php/Top-level#SCRATCH_DIR_.2F_PERMANENT_DIR SCRATCH_DIR]
NWChem may use fairly large amounts of scratch disk space, at volumes and bandwidths not suitable for your $HOME directory. To avoid quota overruns, use either $SANDBOX or $TMPDIR for NWChem scratch. The *.nw input file is static, which makes it difficult to point to temporary directories via their environment variables. As a solution, I suggest creating a ''symbolic link'' with a fixed name in the job script prior to running NWChem, and referring to that name in your *.nw file. See <code>$NWCHEM_HOME/sample.job</code> for the complete file; the steps are:
 
(1) In the NWChem input file, add the line "scratch_dir ./scratch" near the beginning.
 cat job.nw
<pre>
title "foo structure"
echo
scratch_dir ./scratch
geometry units angstroms
...
</pre>
 
(2) In the job script, set up a job-specific scratch directory and make it available to NWChem as a symlink in the job's working directory. Choose one of:
:* $SANDBOX: distributed scratch (not auto-cleaned, high bandwidth but shared file system)
:* $TMPDIR: node-local (auto-cleaned, slower)
  cat $NWCHEM_HOME/sample.job
<source lang="bash">
#!/bin/bash
#PBS ...
...


# Avoid quota overruns.
# (1) Add the line "scratch_dir ./scratch" to the input file.
# (2) Set up an appropriate directory and provide a local symlink; choose one of:
#  - $SANDBOX: distributed scratch on shared file system (clean up here)
#  - $TMPDIR: node-local (auto-cleaned)
 
dir=$SANDBOX/$PBS_JOBID; mkdir -p $dir; trap "rm -r $dir" EXIT
#dir=$TMPDIR


ln -snf $dir ./scratch


mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
         nwchem foo.nw
</source>
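
A quick way to confirm the link before or during a run (illustration only; the exact target depends on the scratch choice above):
<source lang="bash">
ls -ld scratch     # expect a symlink pointing at the scratch directory chosen in the job script
</source>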


=== Trading disk space for CPU ===
* [http://www.nwchem-sw.org/index.php/Development:Density_Functional_Theory_for_Molecules#DIRECT.2C_SEMIDIRECT_and_NOIO_--_Hardware_Resource_Control SEMIDIRECT – Hardware Resource Control]
By default, AO integrals are cached on disk. The delays incurred may well be too much for system sizes above several hundred basis functions. It may prove more efficient to recalculate the integrals as needed, trading disk space for CPU time:
<pre>
scf
 semidirect memsize 100000000 filesize 0
 ...
end
</pre>
Thanks to J. Foley for pointing this out. Note that, unlike for the <code>MEMORY</code> directive, unit suffixes are not supported here; the quantity is given in "words", with 1 word = 8 bytes on current hardware, so <code>memsize 100000000</code> above corresponds to roughly 800 MB.
* See also http://www.nwchem-sw.org/index.php/Development:TCE#Maximizing_performance
<blockquote>
For parallel jobs on clusters with poor disk performance [or file size requirements from concurrent jobs – stern] on the filesystem used for scratch_dir,
it is a good idea to disable disk IO during the SCF stage of the calculation.
This is done by adding semidirect memsize N filesize 0, where N is 80% of the stack memory divided by 8,
as the value in this directive is the number of dwords, rather than bytes.
With these settings, if the aggregate memory is sufficient to store the integrals, the SCF performance will be excellent,
and it will be better than if direct is set in the SCF input block.
 
If scratch_dir is set to a local disk, then one should use as much disk as is permissible, controlled by the value of filesize.
On many high-performance computers, filling up the local scratch disk will crash the node, so one cannot be careless with these settings.
In addition, on many such machines, the shared file system performance is better than that of the local disk (this is true for many NERSC systems [and Carbon – stern]).
</blockquote>
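
To turn the quoted rule of thumb into a number, here is a minimal sketch; the 1000 MB stack figure is only an assumed example, so substitute the stack portion of your own <code>memory</code> setting:
<source lang="bash">
# Sketch only: derive "semidirect memsize N" from a per-process stack allocation,
# following the 80%-of-stack rule quoted above (1 word = 8 bytes).
stack_mb=1000                                         # assumed stack portion of "memory total ..."
memsize=$(( stack_mb * 1024 * 1024 * 80 / 100 / 8 ))  # MB -> bytes, take 80%, convert to 8-byte words
echo "semidirect memsize $memsize filesize 0"         # here: ~105 million words, i.e. ~800 MB of integrals in memory
</source>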
