OpenFOAM
Getting started
To use OpenFOAM, load an OpenFOAM module:
module load openfoam
This loads the latest stable version. The OpenFOAM modules load the appropriate version of MPI for you; loading any other MPI module, either before or after an OpenFOAM module, is likely to cause problems.
Unless otherwise noted in the module name, OpenFOAM is compiled with GCC and Intel MPI for performance reasons.
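If you need a particular version rather than the site default, you can list what is installed first. These are generic environment-modules commands; the version placeholder below follows the same convention as <foamExec> above and is not a promise of what your site provides:
module avail openfoam           # list the installed OpenFOAM modules
module load openfoam/<version>  # load a specific version from that list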
To load a number of convenient aliases into your environment, add the appropriate line below to your shell configuration files:
source $FOAM_INST_DIR/OpenFOAM-1.6/etc/aliases.sh    # for bash and other Bourne-style shells
source $FOAM_INST_DIR/OpenFOAM-1.6/etc/aliases.csh   # for csh and tcsh
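As a minimal sketch for bash users (assuming bash is your login shell and that ~/.bashrc is the startup file your site recommends; csh/tcsh users would add the .csh line to ~/.cshrc instead):
# Append the alias setup to your bash startup file; the single quotes keep
# $FOAM_INST_DIR unexpanded until the openfoam module has defined it.
echo 'source $FOAM_INST_DIR/OpenFOAM-1.6/etc/aliases.sh' >> ~/.bashrc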
The following job script should be sufficient to get you started with running parallel jobs:
#!/bin/bash
. /etc/profile.d/modules.sh
module load openfoam
cd $PBS_O_WORKDIR
# don't use -np or -machinefile
mpirun <foamExec> <otherArgs> -parallel | tee logfile
You needn't (and shouldn't) specify a hostfile or number of processors; mpirun will automatically use $PBS_NODEFILE as the hostfile.
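Putting it together, here is one possible workflow sketch. It assumes a PBS/TORQUE scheduler (as the $PBS_* variables above imply), that system/decomposeParDict in your case directory already sets the number of subdomains, and that the script above is saved as openfoam.sh; the resource request is a placeholder:
decomposePar                        # split the case into processor* directories
qsub -l nodes=2:ppn=8 openfoam.sh   # submit the parallel job
# after the job completes:
reconstructPar                      # merge the per-processor results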
Pre- and post-processing
Resource-intensive pre- and post-processing should be done on a compute node, not on a login node. See the pre- and post-processing instructions for STAR-CCM+ to learn how to run interactive and graphical applications on compute nodes.
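For example, on PBS/TORQUE systems an interactive shell on a compute node can usually be requested with qsub -I; the resource request below is a placeholder, and the site-specific procedure (including running graphical tools) is described on the STAR-CCM+ page:
qsub -I -l nodes=1:ppn=4,walltime=2:00:00   # interactive session on a compute node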
Documentation
The official OpenFOAM documentation is available on the OpenFOAM website. The OpenFOAM Wiki is also useful.