HPC/Carbon Cluster - Development tools
This is an overview of the compilers and MPI libraries available on Carbon. Each package name links to its documentation.
- See also: Module catalog.
Compilers
GNU family
Intel Software Development Products
As of 2011, the following tools are supported as part of the Intel® Composer XE 2011 for Linux bundle.
- Intel C/C++
- Intel Fortran
- Math Kernel Library (MKL) -- see also HPC/Software/Modules/mkl
- Integrated Performance Primitives (IPP) Documentation
- Threading Building Blocks (TBB) Documentation
- Debugger (idb)
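To give a feel for how these tools fit together, here is a minimal compile sketch. The module name (intel) and the source file names are assumptions; check the module catalog for the names actually used on Carbon.

module load intel                       # assumed module name for the Composer XE bundle
icc   -O2 -o hello_c hello.c            # Intel C/C++ compiler
ifort -O2 -o hello_f hello.f90 -mkl     # Intel Fortran; -mkl links against the Math Kernel Library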
Documentation is installed locally mainly under:
$ICC_HOME/Documentation
$ ls -F $ICC_HOME/Documentation/en_US
Release_NotesC.pdf  compiler_c/          documentation_f.htm    getting_started_f.pdf  lgpltext
Release_NotesF.pdf  compiler_f/          flicense               idb/                   mkl/
clicense            documentation_c.htm  getting_started_c.pdf  ipp/                   tbb/
No longer maintained on Carbon:
MPI
OpenMPI
This is the primary and recommended MPI implementation. It supports both Ethernet and InfiniBand interconnects, with InfiniBand set as the default by means of an OMPI_MCA_… environment variable.
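The exact variable set on Carbon is abbreviated above. As a sketch of the mechanism, any Open MPI MCA parameter can be overridden per job through its OMPI_MCA_ environment form, for example forcing the TCP (Ethernet) transport instead of InfiniBand:

module load openmpi                  # normally loaded by default
export OMPI_MCA_btl=tcp,sm,self      # standard btl parameter: TCP + shared memory instead of openib
mpirun -machinefile $PBS_NODEFILE -np $(wc -l < $PBS_NODEFILE) ./a.out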
Intel MPI
- Intel MPI Documentation
- I recommend using the new Hydra MPI process manager; see sec. 2.4, Scalable Process Management System, in the Reference Manual. It avoids the need to set up and tear down the older and considerably less stable MPD manager.
mpiexec.hydra \
-machinefile $PBS_NODEFILE \
-np $(wc -l < $PBS_NODEFILE) \
./a.out
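For context, a complete PBS job script built around this invocation might look as follows; the resource request and program name are placeholders:

#!/bin/bash
#PBS -l nodes=2:ppn=8                # placeholder resource request
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR                    # run from the submission directory

module unload openmpi                # keep a single MPI runtime in this job
module load impi

mpiexec.hydra \
    -machinefile $PBS_NODEFILE \
    -np $(wc -l < $PBS_NODEFILE) \
    ./a.out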
- The openmpi and impi modules can coexist at runtime if:
  - impi is loaded first and is called using mpiexec.hydra.
  - openmpi is loaded second and is called using mpirun.
  - you don't attempt to compile software using the convenience wrappers like mpicc, mpif90, etc. For these wrappers, the last module loaded will be active. It may be possible to use full paths like $IMPI_HOME/bin/mpif90, but this has not been tested fully.
- Since openmpi is loaded by default, use the following sequence in ~/.bashrc:
module unload openmpi
module load impi
module load openmpi
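To sanity-check that the ordering took effect, the following commands should show impi loaded before openmpi and the two launchers resolving to their respective installations (exact paths depend on the installed versions):

module list            # impi should appear before openmpi
which mpiexec.hydra    # should resolve under the impi installation
which mpirun           # should resolve under the openmpi installation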
MPICH/MPICH2
- Not installed explicitly, but Intel-MPI is based on MPICH2 and provides an MPICH2 runtime environment for precompiled binaries.
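As an illustration of that compatibility, a binary built elsewhere against MPICH2 can in principle be launched with the Intel MPI runtime; the binary name below is hypothetical, and compatibility should still be verified per application:

module unload openmpi
module load impi
mpiexec.hydra -machinefile $PBS_NODEFILE \
              -np $(wc -l < $PBS_NODEFILE) \
              ./mpich2_app           # hypothetical binary linked against MPICH2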