HPC/Carbon Cluster - Development tools

This is an overview of the compilers and MPI libraries available on Carbon. Each package name leads to its documentation.

See also
Module catalog.

Compilers

GNU family

Intel Software Development Products

As of 2011, the tools supported on Carbon are those bundled in Intel® Composer XE 2011 for Linux.

Documentation is installed locally mainly under:

$ICC_HOME/Documentation
$ ls -F $ICC_HOME/Documentation/en_US
Release_NotesC.pdf  compiler_c/          documentation_f.htm    getting_started_f.pdf  lgpltext
Release_NotesF.pdf  compiler_f/          flicense               idb/                   mkl/
clicense            documentation_c.htm  getting_started_c.pdf  ipp/                   tbb/

No longer maintained on Carbon:

MPI

OpenMPI

This is the primary and recommended MPI implementation on Carbon. It supports both the Ethernet and InfiniBand interconnects, with InfiniBand selected as the default by means of an OMPI_MCA_… environment variable.
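
A minimal sketch of the idea, assuming the standard Open MPI mechanism of OMPI_MCA_<param> environment variables and the btl transport-selection parameter; the exact variable configured on Carbon is elided above and may differ:

 # Hypothetical illustration: prefer InfiniBand (openib), plus the loopback
 # and shared-memory transports. The variable Carbon actually sets may differ.
 export OMPI_MCA_btl="openib,self,sm"
 # Launch under PBS, taking the node list from the Torque-provided file:
 mpirun -machinefile $PBS_NODEFILE \
	-np $(wc -l < $PBS_NODEFILE) \
	./a.out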

Intel MPI

  • Intel MPI Documentation
  • I recommend using the new Hydra MPI process manager; see sec. 2.4, Scalable Process Management System, in the Reference Manual. This avoids the need to set up and tear down the older and considerably less stable MPD manager.
 mpiexec.hydra \
	-machinefile $PBS_NODEFILE \
	-np $(wc -l < $PBS_NODEFILE) \
	./a.out
  • The openmpi and impi modules can coexist at runtime if:
    • impi is loaded first and is called using mpiexec.hydra,
    • ompi is loaded second and is called using mpirun,
    • you don't attempt to compile software using the convenience wrappers mpicc, mpif90, etc.; for these wrappers, whichever module was loaded last wins. It may be possible to use full paths such as $IMPI_HOME/bin/mpif90, but this has not been fully tested. A sketch of this recipe follows the list.
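
A minimal sketch of the coexistence recipe above, using only the module names and commands already mentioned; the binary names are placeholders, and the full-path wrapper call at the end is the untested possibility from the last point:

 # Load order matters: Intel MPI first, Open MPI second.
 module load impi
 module load openmpi

 # Intel MPI jobs are launched with Hydra:
 mpiexec.hydra -machinefile $PBS_NODEFILE -np $(wc -l < $PBS_NODEFILE) ./a.out

 # Open MPI jobs are launched with mpirun:
 mpirun -machinefile $PBS_NODEFILE -np $(wc -l < $PBS_NODEFILE) ./b.out

 # Untested: call a specific wrapper by its full path instead of relying on
 # whichever module was loaded last:
 $IMPI_HOME/bin/mpif90 -o a.out prog.f90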

MPICH/MPICH2

  • Not installed explicitly, but Intel MPI is based on MPICH2 and provides an MPICH2-compatible runtime environment for precompiled binaries.
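
As a hypothetical illustration (the binary name is invented), running a binary precompiled against MPICH2 under the Intel MPI runtime might look like this:

 # Load Intel MPI to provide the MPICH2-compatible runtime, then launch
 # the precompiled binary with Hydra as usual:
 module load impi
 mpiexec.hydra -machinefile $PBS_NODEFILE \
	-np $(wc -l < $PBS_NODEFILE) \
	./precompiled_mpich2_app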