HPC/Module Setup

== Introduction ==
Carbon uses the Environment Modules package to dynamically provision software. The package primarily modifies your $PATH and other environment variables.


* [[HPC/Modules | '''Current Carbon module catalog''']]


To select packages for your use, place <code>module load ''name''</code> commands near the end of your <code>~/.bashrc</code> file.
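To get a feel for what a module does, compare your environment before and after loading one; the package name here is just an example from the catalog, and <code>which</code> simply shows where the command now resolves:
<syntaxhighlight lang="bash">
# Illustrative only -- any module name from the catalog works the same way.
echo $PATH          # search path before loading
module load jmol    # example package
echo $PATH          # the package's bin/ directory is now prepended
which jmol          # the command now resolves inside the package directory
</syntaxhighlight>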

== The Shell ==
The user shell on Carbon is bash.

=== A note on tcsh for long-time Unix/Linux users ===
[t]csh used to be the only vendor-agnostic and widely available shell with decent command line features. Bash is now much better positioned in this area, and offers consistent programming on the command line and in scripts. Therefore, tcsh is now available only on request (to me, stern), if you absolutely, positively, insist, and know what you're doing. (There will be a quiz.)

There are good reasons not to use the C shell, and classic wisdom states “csh programming considered harmful”. Even though using [t]csh merely for interactive purposes may appear tolerable, it is too tempting to set out from tweaking .cshrc into serious programming. Don't do it. It is not supported. It's dead, Jim.

* Do not use <code>chsh</code> or <code>chfn</code>. Changes will be overwritten.

== Shell Customizations ==
Place customizations in your <code>~/.bashrc</code> file by using a text editor such as <code>nano</code>. A pristine copy is shown below, or can be found at <code>/etc/skel/.bashrc</code>.
<syntaxhighlight lang="bash">
# ~/.bashrc
# User's bash customization, Carbon template; stern 2011-09-15.

# Merge global definitions -- do not edit.
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Carbon customizations -- edit below.
export PATH=$HOME/bin:$PATH

#alias blah="foo -xyz"
#module load ...
</syntaxhighlight>
* For instance, to add binaries from a user's own compilation of <code>somepackage</code>, and load a couple of modules:
<syntaxhighlight lang="bash">
export PATH=$HOME/somepackage/bin:$HOME/bin:$PATH
# Load Carbon applications
module load name1 name2 …
module load name3/version
</syntaxhighlight>
=== Test ===
To test the impact of any changes, temporarily run a new shell:
<syntaxhighlight lang="bash">
bash -l        # lowercase letter L
# run other shell commands
...
# inspect nesting level, then return to the parent shell
echo $SHLVL
exit
</syntaxhighlight>
* Make further corrections to ~/.bashrc as needed.
* You might get lost in multiple ''nested'' shell levels. To see the current nesting level, inspect the environment variable $SHLVL as shown above. A normal login shell runs at level '''1''', and the <code>exit</code> command in such a shell will log you out.
=== Pick up newly added modules ===
When all looks good, either:
* simply restart your shell in place:
: <code>exec bash -l</code>
:: Again, that's a lowercase letter "L". A quick confirmation is sketched below.
* Alternatively, log out and back in, which is a bit more involved.
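For example, a minimal check after restarting the shell (both commands are taken from above):
<syntaxhighlight lang="bash">
exec bash -l    # restart the login shell in place; ~/.bashrc is read again
module list     # the newly added modules should now appear
</syntaxhighlight>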
 
=== Caveats ===
* Among the various [http://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html bash startup files], <code>.bashrc</code> is the one relevant [http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_01_02.html#sect_01_02_02_02_06 when invoked remotely], such as on MPI child nodes reached by sshd.
* In general, '''do not place a <code>"module load …"</code> command in a PBS job script.'''
** This will only work reliably for single-node jobs.
** It will generally ''fail for multi-node'' jobs (nodes &gt; 1).
**: Reason: The job script is only executed (by the <code>pbs_mom</code> daemon on your behalf) on the first core of the first node of your request. The environment of this process will be cloned for the other cores on the first node, but not for cores on other nodes. How remote environments are configured depends on the MPI implementation. The various <code>mpirun</code> or <code>mpiexec</code> commands offer flags to pass some or all environment variables to other MPI processes, but these flags are implementation-specific and may not work reliably; see the sketch after this list.
* The most reliable and recommended way is, as above, to place module commands in <code>~/.bashrc</code>. This might preclude job-specific module sets for conflicting modules or tasks. I'm thinking about a proper solution for this. --[[User:Stern|stern]]
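If you nevertheless need a job-specific variable on all nodes, the sketch below shows one way to forward it with Open MPI's <code>mpirun -x</code>; the variable and program names are made up, and other MPI implementations use different flags (e.g. <code>-genv</code>):
<syntaxhighlight lang="bash">
# Sketch only -- MY_SETTING and ./myprog are placeholders.
export MY_SETTING=value
mpirun -x MY_SETTING \
    -machinefile $PBS_NODEFILE \
    -np $(wc -l < $PBS_NODEFILE) \
    ./myprog
</syntaxhighlight>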


== Modules – General documentation ==
* A general introduction to modules can be found at many other sites.
* On Carbon, <code>module help</code> summarizes the available subcommands (a short example session follows below):
$ '''module help'''
…
  Usage: module [ switches ] [ subcommand ] [subcommand-args ]
…
  Available SubCommands and Args:

+ load      modulefile [modulefile ...]
+ unload    modulefile [modulefile ...]
+ switch    [modulefile1] modulefile2
+ list

+ avail     [modulefile [modulefile ...]]
+ whatis    [modulefile [modulefile ...]]
+ help      [modulefile [modulefile ...]]
+ show      modulefile [modulefile ...]
* For full documentation, consult the manual page:
$ '''man module'''
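A typical interactive session using these subcommands might look like the following; the package name is only an example:
<syntaxhighlight lang="bash">
module avail            # list all installed modulefiles
module load fftw3       # load one (example name)
module list             # show what is currently loaded
module show fftw3       # display the environment changes the module makes
module unload fftw3     # remove it from the current session
</syntaxhighlight>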


== Module Conventions on Carbon ==
 
See [[HPC/Module naming scheme 2008]]
* Most application software is installed under <code>'''/opt/soft'''</code>
* Package directories are named <code>''' ''name-version-build'' '''</code>, e.g. <code>/opt/soft/jmol-12.1.37-1</code>.
* Module names are organized by a mostly version-less name, with the version following after a slash: <code>''' ''name/version-build'' '''</code>. Using the <code>''name''</code> component alone is possible and will select a default version for a subcommand to act upon (see the example below this list). Some packages carry a major version number in their name, notably fftw3 and vasp5.
* <code>module help</code> briefly describes a package, gives its version number (in the module path) and will usually contain a link to its home page.
$ '''module help jmol'''
----------- Module Specific Help for 'jmol/12.0.34-1' -------------
Jmol is a molecule viewer platform for researchers in chemistry and
biochemistry, implemented in Java for multi-platform use.  This is the
standalone application.  It offers high-performance 3D rendering with
no hardware requirements and supports many popular file formats.
http://jmol.sourceforge.net/
http://wiki.jmol.org/
 
* Default modules are loaded by <code>/etc/profile.d/zz-moduleuse.sh</code>. (The strange name form ensures this profile segment is loaded last.) They currently are:
$ '''module list'''
Currently Loaded Modulefiles:
  1) moab/6.0.3-1              4) icc/11/11.1.073
  2) gold/2.1.11.0-4          5) ifort/11/11.1.073
  3) openmpi/1.4.3-intel11-1  6) mkl/10.2.6.038
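Returning to the naming convention above: loading by the bare package name selects the default version, which you can verify afterwards; the package name is only an example:
<syntaxhighlight lang="bash">
module avail fftw3      # list the installed versions of the package
module load fftw3       # bare name: the default version is selected
module list             # shows which version-build was actually loaded
</syntaxhighlight>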
<!-- OBSOLETE:
For now the only applications are the [[HPC/Carbon Cluster - Development tools| Development tools]].
 
Admin note:  The master copy of these files resides in <code>mgmt{01,02}:/opt/teamhpc/node-skel/etc/profile.d</code> and is distributed by <code>~root/bin/skeldistrib</code>.
-->
 
=== Package home directories ===
A package directory usually contains Unix-style subdirectories for the various files, which the modulefile automatically integrates into your user environment by means of standard environment variables (see the example after the list).
; <code>bin/</code>: the main package executable and associated tools and utility scripts; added to <code>'''$PATH'''</code>.
; <code>lib/</code>: static and shared libraries; added to <code>'''$LD_LIBRARY_PATH'''</code> if it contains shared libs.
; <code>man/, share/man/, doc/, share/doc/</code>: man pages (added to <code>'''$MANPATH'''</code>) and human-readable documentation.
; <code>include/</code>: C header files and other script integration files for using library packages; added to <code>'''$INCLUDE'''</code>.
; … and others.:
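To see how these directories are integrated, you can inspect the variables after loading a library module; the package name is only an example:
<syntaxhighlight lang="bash">
module load fftw3
echo $LD_LIBRARY_PATH | tr ':' '\n' | grep -i fftw   # the module's lib/ directory
echo $MANPATH | tr ':' '\n' | grep -i fftw           # its man pages, if any were added
</syntaxhighlight>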
 
Further, most modules will set a convenience variable <code>''' $''NAME''_HOME'''</code> which points to the toplevel directory. Dashes in the package name are converted to underscores. This is mostly useful to inspect documentation and auxiliary files:
$ '''module load quantum-espresso'''
<syntaxhighlight lang="bash">
$ ls -F $QUANTUM_ESPRESSO_HOME/doc
Doc/  atomic_doc/  examples/
</syntaxhighlight>
 
: … and to specify link paths in makefiles:
$ '''module load fftw3'''
 
<syntaxhighlight lang="make">
LDFLAGS += -L$(FFTW3_HOME)/lib
</syntaxhighlight>
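: … or in one-off compile commands; the compiler invocation and library name below are only illustrative:
<syntaxhighlight lang="bash">
# Illustrative compile line; adjust compiler, flags, and libraries for your code.
module load fftw3
icc -I$FFTW3_HOME/include -L$FFTW3_HOME/lib -o mycode mycode.c -lfftw3
</syntaxhighlight>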
 
=== Package-specific sample jobs ===
Packages that require more than the [[HPC/Submitting and Managing Jobs/Example Job Script | '''standard Carbon job template''']] contain a sample job in the toplevel directory:
<syntaxhighlight lang="bash">
module load quantum-espresso
cat $QUANTUM_ESPRESSO_HOME/*.job
</syntaxhighlight>
which gives:
<syntaxhighlight lang="bash">
#!/bin/bash
# Job template for Quantum ESPRESSO 4.x on Carbon
#
#PBS -l nodes=1:ppn=8
#PBS -l walltime=1:00:00
#PBS -N qe_jobname
#PBS -A cnm12345
#
# Output and error log:
#PBS -o job.out
#PBS -e job.err
#
## send mail at begin, end, abort, or never (b, e, a, n):
#PBS -m ea
 
cd $PBS_O_WORKDIR
 
# job-specific tmp directories -- each is node-local, wiped on exit
export ESPRESSO_TMPDIR=$TMPDIR
 
mpirun -x ESPRESSO_TMPDIR \
    -machinefile $PBS_NODEFILE \
    -np $(wc -l < $PBS_NODEFILE) \
    pw.x \
    < input.txt > output.txt
</syntaxhighlight>
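To adapt such a sample for your own runs, one approach is to copy it into your working directory, adjust the <code>#PBS</code> directives and file names, and submit it; the job file name used here is hypothetical:
<syntaxhighlight lang="bash">
module load quantum-espresso
cp $QUANTUM_ESPRESSO_HOME/*.job .    # copy the sample job script(s) into the current directory
# edit the #PBS directives (account, walltime) and the input/output names, then submit:
qsub pw.job                          # hypothetical file name -- use the actual one
</syntaxhighlight>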


[[Category:HPC]]
