Next: Running Parallel Jobs Up: How to Run TURBOMOLE Previous: Modules and Data Flow   Contents   Index


Parallel Runs

Some of the TURBOMOLE modules are parallelized using the message passing interface (MPI) for distributed and shared memory machines or, in the case of dscf and ricc2, also with OpenMP for shared memory machines. The list of parallelized programs presently includes:

Additional keywords necessary for parallel runs with the MPI binaries are described in Chapter 15. However, these keywords do not have to be set by the users: when the parallel version of TURBOMOLE is used, scripts replace the binaries. These scripts prepare the usual input, run the necessary steps, and automatically start the parallel programs. Users only have to set environment variables, see Sec. 3.2.1 below.
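As a sketch, the environment for an MPI run might be prepared as follows. The variables PARA_ARCH and PARNODES are those conventionally used by the TURBOMOLE start scripts (details in Sec. 3.2.1); the installation path and the process count are placeholders:

```shell
# Hypothetical environment setup for an MPI-parallel TURBOMOLE run.
# The installation path below is a placeholder for the actual one.
export TURBODIR=/opt/turbomole      # placeholder installation directory
export PARA_ARCH=MPI                # select the parallel (MPI) binaries/scripts
export PARNODES=8                   # placeholder: number of MPI processes
export PATH=$TURBODIR/scripts:$PATH # put the wrapper scripts first in PATH
```

With such a setup, invoking a module name on the command line runs the wrapper script, which in turn launches the parallel binaries with PARNODES processes.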

To use the OpenMP parallelization, only an environment variable needs to be set. To use it efficiently, however, a few additional points should be considered, e.g. memory usage; these are described in Sec. 3.2.2.
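Assuming the thread count is controlled via the standard OpenMP environment variable OMP_NUM_THREADS (the specifics are given in Sec. 3.2.2), a minimal sketch looks like:

```shell
# Hypothetical OpenMP setup; the thread count is a placeholder and should
# not exceed the number of physical cores on the shared-memory machine.
export OMP_NUM_THREADS=4
```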



Subsections