

Parallel RI-MP2 and RI-CC2 Calculations

The ricc2 program is partially parallelized for distributed-memory architectures (e.g. clusters of Linux boxes) based on the message passing interface (MPI) standard. In the present version, parallel calculations can be carried out for ground-state and excitation energies for all wavefunction models available in ricc2. The analytic gradients for RI-MP2 and RI-CC2 in the ground state and for RI-CC2 in excited states are also parallelized.

While in general the parallel execution of ricc2 works similarly to that of other parallelized Turbomole modules (such as dscf and grad), there are some important differences, in particular concerning the handling of the large scratch files needed for RI-CC2 (or RI-MP2). As with the parallel version of dscf, the parallel version of ricc2 assumes that the program is started in a directory which is readable (and writable) on all compute nodes under the same path (e.g. an NFS directory). This directory must contain all input files and will at the end of a calculation contain all output files. Large scratch files (e.g. for integral intermediates) will be placed under the path specified in the control file with $tmpdir (see Section 15.2.13), which should point to a directory in a file system with good performance. The parallel version of the ricc2 program can presently account for the following two situations:

Clusters with single-processor nodes and local disks:
Specify in $tmpdir a directory in the file system on the local disk. All large files will be placed on the nodes in these file systems. (The local file system must have the same name on all nodes.)
Clusters with multiple (e.g. dual) processor nodes and local disks:
Set, in addition to $tmpdir, the keyword $sharedtmpdir to indicate that several processes might share the same local disk. The program will then create node-specific subdirectories in the directory given in $tmpdir (see the example after this list).
Note that at the end of a ricc2 run the scratch directories specified with $tmpdir are not guaranteed to be empty. To avoid filling your file system, you should remove them after the ricc2 calculation has finished.
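As a minimal sketch, the relevant control-file entries for the second situation might look as follows; the path /scratch/ricc2 is only a hypothetical example and must be replaced by the local scratch directory available on your nodes:

$tmpdir /scratch/ricc2
$sharedtmpdir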

Another difference from the parallel HF and DFT (gradient) programs is that ricc2 communicates much larger amounts of data between the compute nodes. With a fast network interconnect (Gigabit or better) this should not cause any problems, but with slow networks the communication might become the limiting factor for performance or overload the system. If this happens, the program can be put into an alternative mode in which the communication of integral intermediates is replaced, wherever feasible, by a reevaluation of the intermediates (at the expense of a larger operation count). To enable this mode, add the following data group to the control file:

$mpi_param
  min_comm

