Introduction
Carbon uses the Environment Modules package to dynamically provision software. The package primarily modifies your $PATH and other environment variables.

- Current Carbon module catalog: HPC/Modules

To select packages for your use, place `module load name` commands near the end of your `~/.bashrc` file.
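As a minimal illustration of what loading a module does (the module name `gcc` below is a placeholder; pick a real name from the catalog):

```bash
module avail          # list the modules available on the system
module show gcc       # preview the PATH and other variables this module would set
module load gcc       # apply those changes to the current shell
echo $PATH            # the module's bin directory now appears near the front
module list           # confirm which modules are currently loaded
```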
The Shell
The user shell on Carbon is bash.
A note on tcsh for long-time Unix/Linux users
[t]csh used to be the only vendor-agnostic and widely available shell with decent command line features. Bash is now much better positioned in this area, and offers consistent programming on the command line and in scripts. Therefore, tcsh is now available only on request (to me, stern), if you absolutely, positively, insist, and know what you're doing. (There will be a quiz.)
There are good reasons not to use the C shell, and classic wisdom states "csh programming considered harmful". Even though using [t]csh merely for interactive purposes may appear tolerable, it is too tempting to drift from tweaking `.cshrc` into serious programming. Don't do it. It is not supported. It's dead, Jim.

- Do not use `chsh` or `chfn`. Changes will be overwritten.
Shell Customizations
Place customizations in your `~/.bashrc` file, using a text editor such as `nano`. A pristine copy is shown below, or can be found at `/etc/skel/.bashrc`:
```bash
# ~/.bashrc
# User's bash customization, Carbon template; stern 2011-09-15.

# Merge global definitions -- do not edit.
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Carbon customizations -- edit below.
export PATH=$HOME/bin:$PATH
#alias blah="foo -xyz"
#module load ...
```
- For instance, to add binaries from a user's own compilation of `somepackage`, and load a couple of modules:

```bash
export PATH=$HOME/somepackage/bin:$HOME/bin:$PATH

# Load Carbon applications
module load name1 name2 …
module load name3/version
```
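After restarting the shell (see the next sections), one quick way to verify the resulting lookup order; `somecommand` is a placeholder for a binary from `somepackage`:

```bash
type -a somecommand               # every match on $PATH; the first one wins
echo $PATH | tr ':' '\n' | head   # PATH entries in priority order
```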
Test
To test the impact of any changes, temporarily run a new shell:
```bash
bash -l    # lowercase letter L
# run other shell commands
...
# inspect nesting level, then return to the parent shell
echo $SHLVL
exit
```
- Make further corrections to `~/.bashrc` as needed.
- You might get lost in multiple *nested* shell levels. To see the current nesting level, inspect the environment variable $SHLVL as shown above. A normal login shell runs at level **1**, and the `exit` command in such a shell will log you out.
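For illustration, the nesting behaves as follows:

```bash
echo $SHLVL    # 1 in the normal login shell
bash -l        # start a nested login shell
echo $SHLVL    # now 2
exit           # back to the level-1 shell; 'exit' there would log you out
```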
Pick up newly added modules
When all looks good, either:
- Simply restart your shell in place: `exec bash -l` (again, that's a lowercase letter "L").
- Alternatively, log out and back in, which is a bit more involved.
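A quick way to confirm that `exec` replaced the shell in place rather than stacking a new one:

```bash
echo $SHLVL    # note the current level, e.g. 1
exec bash -l   # replaces the running shell; no new nesting level
echo $SHLVL    # unchanged, since no extra shell was started
module list    # modules newly added in ~/.bashrc are now loaded
```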
Caveats
- Among the various [bash startup files](http://www.gnu.org/software/bash/manual/html_node/Bash-Startup-Files.html), `.bashrc` is the one relevant [when bash is invoked remotely](http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_01_02.html#sect_01_02_02_02_06), such as on MPI child nodes reached by sshd.
- In general, **do not place a `module load …` command in a PBS job script.**
  - This will only work reliably for single-node jobs.
  - It will generally *fail for multi-node* jobs (nodes > 1).
    - Reason: The job script is executed (by the `pbs_mom` daemon on your behalf) only on the first core of the first node of your request. The environment of this process is cloned for the other cores on the first node, but not for cores on other nodes. How remote environments are configured depends on the MPI implementation. The various `mpirun` or `mpiexec` commands offer flags to pass some or all environment variables to other MPI processes, but these flags are implementation-specific and may not work reliably (see the sketch after this list).
- The most reliable and recommended way is as above, to place module commands in `~/.bashrc`. This may preclude job-specific module sets when modules or tasks conflict. I'm thinking about a proper solution for this. --stern
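For illustration only, here is how two common MPI implementations can forward environment variables; the flags shown are specific to each implementation, and `./my_app` is a placeholder:

```bash
# Open MPI: export selected variables to all ranks with -x
mpirun -np 16 -x PATH -x LD_LIBRARY_PATH ./my_app

# MPICH (Hydra launcher): pass the entire environment with -genvall,
# or a single variable with -genv NAME value
mpiexec -n 16 -genvall ./my_app
```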
Modules – General documentation
```
$ module help
…
  Usage: module [ switches ] [ subcommand ] [subcommand-args ]
…
  Available SubCommands and Args:
+ load      modulefile [modulefile ...]
+ unload    modulefile [modulefile ...]
+ switch    [modulefile1] modulefile2
+ list
+ avail     [modulefile [modulefile ...]]
+ whatis    [modulefile [modulefile ...]]
+ help      [modulefile [modulefile ...]]
+ show      modulefile [modulefile ...]
```
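To illustrate these subcommands in practice (the module name `name1` and the version `name1/2.0` are placeholders):

```bash
module avail                    # what can be loaded
module whatis name1             # one-line description of a module
module load name1
module switch name1 name1/2.0   # swap the loaded module for another version
module unload name1
```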
For full documentation, consult the manual page:

```
$ man module
```

Module Conventions on Carbon

See HPC/Module naming scheme 2008.