HPC/Module Setup
From CNM Wiki
Revision as of 15:36, May 19, 2011

Introduction

Carbon uses the Environment Modules package (http://modules.sourceforge.net/) to dynamically provision software. The package primarily modifies your $PATH and other environment variables.

To select packages for your use, place module load name commands near the end of your ~/.bashrc file.

Shell startup files

  • The default login shell on Carbon is bash (http://en.wikipedia.org/wiki/Bash_(Unix_shell)).
  • tcsh only if you insist; it is not supported. In fact: “csh programming considered harmful” (http://www.faqs.org/faqs/unix-faq/shell/csh-whynot/) – Tom Christiansen.
  • Place customizations in your file ~/.bashrc, for example:
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

export PATH=$HOME/mypackage/bin:$PATH
module load name1 name2 … 
Note
  • In general, do not place a "module load …" command in a PBS job script.
If you must, note that this command will only work for single-node jobs. It will generally fail for multi-node jobs (nodes > 1). The reason is that the job script is only executed (by the pbs_mom daemon on your behalf) on the first core on the first node of your request. In general, this environment will be cloned for the other cores on the first node, but not for cores on other nodes. There are flags for mpirun or mpiexec to pass some or all environment variables to other MPI processes, but these flags are implementation-specific and may not work reliably.
  • The recommended way is, as above, to place module commands in ~/.bashrc. This might preclude job-specific module sets when modules or tasks conflict; I'm still thinking about a proper solution for this.
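If a job-specific environment really is unavoidable, the implementation-specific flags mentioned above can be used explicitly. A minimal sketch for an Open MPI mpirun (the module and program names here are illustrative, not from this catalog); MPICH-family launchers use different flags such as -genvlist, so check the man page of the MPI stack you load:

```shell
# Single-node-safe module load inside the job script, then export
# selected variables to the remote ranks; -x is Open MPI syntax.
module load fftw3                      # illustrative module name
mpirun -x PATH -x LD_LIBRARY_PATH \
	-machinefile $PBS_NODEFILE \
	-np `wc -l < $PBS_NODEFILE` \
	./my_program                   # illustrative executable
```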

Modules – General documentation

  • Current Carbon module catalog: HPC/Software/Catalog
  • A general introduction to modules can be found at many other sites, such as NERSC (http://www.nersc.gov/users/software/nersc-user-environment/modules/).

A summary of the available subcommands is printed by module help:
$ module help
…
  Usage: module [ switches ] [ subcommand ] [subcommand-args ]
…

  Available SubCommands and Args:

+ load		modulefile [modulefile ...]
+ unload	modulefile [modulefile ...]
+ switch	[modulefile1] modulefile2
+ list

+ avail		[modulefile [modulefile ...]]
+ whatis	[modulefile [modulefile ...]]
+ help		[modulefile [modulefile ...]]
+ show		modulefile [modulefile ..]

For full documentation, consult the manual page:

$ man module

Module Conventions on Carbon

  • Most application software is installed under /opt/soft/
  • Package directories are named name-version-build, e.g. /opt/soft/jmol-12.1.37-1.
  • Module names are organized by a mostly version-less name, with the version following after a slash: name/version-build. Using the name alone is possible and will select a default version for a subcommand to act upon. Some packages carry a major version number in their name, notably fftw3 and vasp5.
  • module help briefly describes a package and usually contains a link to its home page.
$ module help jmol

----------- Module Specific Help for 'jmol/12.0.34-1' -------------

	Jmol is a molecule viewer platform for researchers in chemistry and
	biochemistry, implemented in Java for multi-platform use.  This is the
	standalone application.  It offers high-performance 3D rendering with
	no hardware requirements and supports many popular file formats.

	http://jmol.sourceforge.net/
	http://wiki.jmol.org/
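The naming convention above means that a bare name and a fully qualified name both work; for example, using the version-build shown in the help output above:

```shell
module load jmol               # selects the default version for you
module load jmol/12.0.34-1     # pins an explicit version-build
```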
  • Default modules are loaded by /etc/profile.d/zz-moduleuse.sh. (The "zz" prefix ensures this profile segment is sourced last, since /etc/profile.d/ scripts run in alphabetical order.)
$ module list
Currently Loaded Modulefiles:
  1) moab/6.0.3-1              4) icc/11/11.1.073
  2) gold/2.1.11.0-4           5) ifort/11/11.1.073
  3) openmpi/1.4.3-intel11-1   6) mkl/10.2.6.038


Package home directories

Most modules set a convenience variable NAME_HOME that points to the package's top-level directory, with dashes in the package name converted to underscores (e.g. quantum-espresso sets QUANTUM_ESPRESSO_HOME). This is mostly useful for inspecting documentation and auxiliary files:

$ module load quantum-espresso
$ ls -F $QUANTUM_ESPRESSO_HOME/doc
Doc/  atomic_doc/  examples/
… and to specify link paths in makefiles:
$ module load fftw3
LDFLAGS += -L$FFTW3_HOME/lib/

Package-specific sample jobs

Packages that require more than the standard Carbon job template provide a sample job in their top-level directory:

$ module load quantum-espresso
$ cat $QUANTUM_ESPRESSO_HOME/*.job
#!/bin/bash
# Job template for Quantum ESPRESSO 4.x on Carbon
#
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
…
export ESPRESSO_TMPDIR=$TMPDIR
mpirun -x ESPRESSO_TMPDIR \
	-machinefile $PBS_NODEFILE \
	-np `wc -l < $PBS_NODEFILE` \
	pw.x \
	< input.txt > output.txt
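A sample job like the one above is typically copied into the working directory, adapted, and submitted with the standard PBS commands (the local file name here is illustrative):

```shell
cp $QUANTUM_ESPRESSO_HOME/*.job espresso.job   # illustrative name
qsub espresso.job      # submit to the batch system
qstat -u $USER         # check the job's status
```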