HPC/Applications/nwchem
Introduction
Considerations on Carbon
Memory
The default memory allocation is far smaller than is useful on Carbon. Raise it in the input file using, e.g.:
memory total 3200 mb
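If finer control is needed, the same directive can split the allocation explicitly among NWChem's stack, heap, and global memory regions. The split below is only an illustrative sketch, not a tuned recommendation; adjust the proportions to the calculation:

# illustrative explicit split instead of "memory total"
memory stack 800 mb heap 400 mb global 2000 mb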
Disk space
NWChem can use a comparably large amount of disk space, at volumes and bandwidths not suitable for your $HOME directory. To avoid quota overruns, use either $SANDBOX or $TMPDIR for NWChem scratch. The *.nw input file is static, which makes it difficult to point to temporary directories via their environment variables. As a solution, I suggest creating a symbolic link in the job script prior to running NWChem and referencing the link by its fixed name in your *.nw file. See $NWCHEM_HOME/sample.job for the full file. The steps are:
(1) In the job script, set up a job-specific scratch directory as a symlink named "scratch" in the job's working directory. Choose one of:
- $SANDBOX: distributed scratch (not auto-cleaned, high bandwidth but shared file system)
- $TMPDIR: node-local (auto-cleaned, slower)
cat $NWCHEM_HOME/sample.job
#!/bin/bash
#PBS ...
...
dir=$SANDBOX/$PBS_JOBID
#dir=$TMPDIR
link=scratch
# create the job's scratch directory and point the "scratch" symlink at it
mkdir -p $dir
ln -snf $dir $link
mpirun -machinefile $PBS_NODEFILE -np $PBS_NP \
nwchem foo.nw
# clean up the sandbox (not reached if the job is killed for exceeding walltime);
# the glob match guards against removing anything outside $SANDBOX
[[ $dir == $SANDBOX/* ]] && rm -r $dir
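To try the template, copy it into your working directory and submit it with the standard PBS command (the file name below is simply the sample shipped with the module; adapt it and the input file name to your own job):

cp $NWCHEM_HOME/sample.job .
qsub sample.job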
(2) In the NWChem input file, add the line "scratch_dir ./scratch" near the beginning.
cat job.nw
title "foo structure" echo scratch_dir ./scratch geometry units angstroms ...
Trading disk space for CPU
By default, AO integrals are cached on disk. The delays incurred may well become prohibitive for system sizes above several hundred basis functions. It may prove more efficient to recalculate the integrals as needed, trading disk for CPU:
scf
  semidirect memsize 100 filesize 0
  ...
end
Thanks to J. Foley for pointing this out.
See also http://www.nwchem-sw.org/index.php/Development:TCE#Maximizing_performance, which advises:

For parallel jobs on clusters with poor disk performance [or file size requirements from concurrent jobs – stern] on the filesystem used for scratch_dir, it is a good idea to disable disk IO during the SCF stage of the calculation. This is done by adding semidirect memsize N filesize 0, where N is 80% of the stack memory divided by 8, as the value in this directive is the number of dwords, rather than bytes. With these settings, if the aggregate memory is sufficient to store the integrals, the SCF performance will be excellent, and it will be better than if direct is set in the SCF input block.
If scratch_dir is set to a local disk, then one should use as much disk as is permissible, controlled by the value of filesize. On many high-performance computers, filling up the local scratch disk will crash the node, so one cannot be careless with these settings. In addition, on many such machines, the shared file system performance is better than that of the local disk (this is true for many NERSC systems [and Carbon – stern]).
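As a rough worked example (the per-process stack size here is only an assumed figure, not a Carbon-specific recommendation): with memory stack 1000 mb, N ≈ 0.8 × 1000 × 1024² bytes ÷ 8 bytes per dword ≈ 1.0 × 10⁸, so the SCF block would read approximately:

scf
  # assumes "memory stack 1000 mb"; 0.8 * 1048576000 B / 8 B per dword ~ 1.0e8 dwords
  semidirect memsize 100000000 filesize 0
  ...
end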