10.6 Parallel RI-MP2 and RI-CC2 Calculations

All functionalities of the ricc2 program are available in the OpenMP-based parallel version for shared-memory (SMP) architectures. Most functionalities are also parallelized for distributed-memory architectures (e.g. clusters of Linux boxes) based on the message passing interface (MPI) standard.

While the parallel execution of ricc2 in general works similarly to that of other parallelized TURBOMOLE modules such as dscf and grad, there are some important differences, in particular concerning the handling of the large scratch files needed for RI-CC2 (or RI-MP2). As with the parallel version of dscf, the parallel version of ricc2 assumes that the program is started in a directory which is readable (and writable) on all compute nodes under the same path (e.g. an NFS directory). This directory must contain all input files and will, at the end of a calculation, contain all output files. Large scratch files (e.g. for integral intermediates) are placed under the path specified in the control file with $tmpdir (see Section 20.2.19), which should point to a directory in a file system with good performance. All large files are placed on the nodes in these file systems. (The local file system must have the same name on all nodes.) Note that at the end of a ricc2 run the scratch directories specified with $tmpdir are not guaranteed to be empty. To prevent them from filling up your file system, you should remove them after the ricc2 calculation has finished.
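For example, the scratch path could be specified in the control file as sketched below; the directory name is only a placeholder and should be replaced by a fast (ideally node-local) file system that exists under the same name on all compute nodes:

$tmpdir /work/username/ricc2_scratch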

Another difference from the parallel HF and DFT (gradient) programs is that ricc2 communicates much larger amounts of data between the compute nodes. With a fast network interconnect (Gigabit or better) this should not cause any problems, but with slow networks the communication might become the limiting factor for performance or might overload the system. If this happens, the program can be switched to an alternative mode in which the communication of integral intermediates is replaced by a reevaluation of the intermediates (at the expense of a larger operation count) wherever this is feasible. To activate this mode, add the following data group to the control file:

$mpi_param  
  min_comm
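
Since min_comm trades additional floating-point operations for reduced network traffic, it should only be used when the communication, rather than the computation, limits the performance of the calculation.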