MPI support in the build system

This document is only valid for ABINIT ≥ 5.3.4.

Providing a unified interface to MPI support is a delicate task, as there is currently no real standard for either implementation or installation. Up to ABINIT 5.3, the interface to MPI support in the build system was somewhat confusing and was constantly changing. It has now been clarified, and will remain stable at least for the lifetime of ABINIT 5.4.

The "--enable-mpi" option to the "configure" script is controlling all the others:

If "--enable-mpi" is set to "yes", the parallel code will be built only if a usable MPI implementation can be detected. First the build system will check whether the Fortran compiler is able to build MPI source code natively, i.e. without any additional parameters. If not, the "--with-mpi-prefix" option must be specified, and point to the top directory of a MPI installation. All other "--with-mpi-*" options will be ignored. The build system will then try to find executable "mpif90" and "mpirun" binaries in the MPI "bin/" subdirectory, and perform very basic checks. If something goes wrong, the build of parallel code will be automatically disabled.

If "--enable-mpi" is set to "manual", the "--with-mpi-prefix" option will be ignored, contrary to all the other ones. In this case, specifying correct build parameters is under the responsibility of the user, since auto-detection is turned off. In most situations it means feeding the compiler with suitable include flags through the "--with-mpi-fcflags" option, as well as relevant link flags through "--with-mpi-fc-ldflags".

Additional levels of parallelisation may be activated through further configure options; they are still experimental and meant to be used by developers only.

You will find a detailed description of all these options in the source package of ABINIT, within the MPI support section of the "~abinit/doc/config/build-config.ac" template.

Do not hesitate to contact Yann Pouillon for any problem, question, or comment.