I am in the process of writing some small tutorials for fpm-docs using MPI and I was surprised when I saw that fpm metapackages can only use mpif.h, which was deprecated in MPI-4.1.
Is there a benefit to using mpif.h as opposed to use mpi, or even better the more modern use mpi_f08?
If not, should fpm focus on supporting the modern module approaches? I would be interested to understand the complexities of having fpm search for MPI modules instead.
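For anyone following along, a minimal hello-world sketch of the three styles being compared (hypothetical code; with mpif.h the include line would go after implicit none instead of the use statement):

```fortran
! Minimal sketch of the three interface styles:
!   include 'mpif.h'  - legacy include file, no argument checking (deprecated in MPI 4.1)
!   use mpi           - Fortran 90 module with explicit interfaces for most routines
!   use mpi_f08       - modern module with derived-type handles and optional ierror
program hello
   use mpi_f08        ! swap for "use mpi" (or include 'mpif.h') on older stacks
   implicit none
   integer :: rank, ierr
   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   print '(a,i0)', 'hello from rank ', rank
   call MPI_Finalize(ierr)
end program hello
```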
I see now. I believe you are not seeing module support because the module capability is implemented on trunk and has not made it into an official release yet, so if you’re using 0.9.0, it does not work.
mpi="*" literally means “use any default versions currently available”, why should restrict that to the newest versions only? I believe the choice to maximize platform and version compatibility is the reasonable one for the default case.
Semantic versioning was supposed to be coming up in fpm - that was (and is) the mechanism that should make these restrictions possible
Little/no community feedback so far (maybe it’s not much used yet, or perhaps “no news is good news”)
Bottom-line comment: having mpif.h support is very useful for my use cases, as my customers use engineering software that runs either on embarrassingly old Unix clusters with ancient compilers or on MS Windows machines.
It is important to remember that, due to the incompatibility of module formats across Fortran compilers, you can only use MPI modules that were built with the same compiler you are using to build the code that USEs mpi or mpi_f08 (i.e. you can’t use a module built with gfortran with code built with Intel, etc.). Given how important MPI is in almost all parallel codes that run on HPC hardware, this use case alone should be reason enough for the standards committee to define a transportable module format that works with all compilers, irrespective of vendor.
Thank you very much Matt. Your example helped me progress to the stage where I can successfully run fpm build with MPI.
My next challenge towards making a program that uses the MUMPS solver is to provide pointer arrays to MUMPS (and to manage their lifecycle). My equation matrix and RHS vector are currently allocatable arrays, while MUMPS expects pointers to these arrays.
Do you have an example for managing pointer arrays within the MPI lifecycle stages (i.e. mpi_init() and mpi_finalize())?
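What I have so far is roughly this pattern (a hedged sketch with placeholder names, not the actual MUMPS structure fields): keep the data in allocatable target arrays and only associate the solver-facing pointers while MPI is initialized.

```fortran
! Hedged sketch (placeholder names, not the MUMPS API): the system is kept in
! allocatable arrays with the TARGET attribute, and pointers are associated and
! released between MPI_Init and MPI_Finalize.
program pointer_lifecycle
   use mpi_f08
   implicit none
   real(kind(1.d0)), allocatable, target :: a(:), rhs(:)
   real(kind(1.d0)), pointer :: a_ptr(:) => null(), rhs_ptr(:) => null()
   integer :: ierr

   call MPI_Init(ierr)

   allocate(a(100), rhs(10))        ! assemble matrix and RHS here
   a_ptr   => a                     ! hand these to the solver structure
   rhs_ptr => rhs

   ! ... call the solver with a_ptr / rhs_ptr ...

   nullify(a_ptr, rhs_ptr)          ! detach before freeing the data
   deallocate(a, rhs)

   call MPI_Finalize(ierr)
end program pointer_lifecycle
```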
I went down that rabbit hole yesterday night, and right now the best thing out there seems to be Jeff Hammond’s projects that create such an interface: see vapaa and havaita. From the posts I read through yesterday, it seems we need more Fortran MPI programmers!
Having a true compiler-independent Fortran MPI implementation via C interoperability is the only safe way to go imho, nice to see that @JeffH did that for all! Definitely looking forward to digging more into it.
Future MPI standards should go the extra mile, drop all the non-portable module stuff and just embed that MPI_F08 implementation as part of the Standard instead.
In fpm we were constrained, for compatibility reasons (on MS Windows), not to enforce any requirements on the modules nor any TKR checking. So the mpi metapackage just provides the correct build, link, and run commands, but makes no assumptions about the availability of the MPI modules.
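In practice the workflow stays the standard one; a hedged sketch (exact runner arguments depend on your fpm version and MPI launcher):

```sh
# fpm resolves compiler wrappers and link flags through the mpi metapackage;
# mpiexec is only needed at run time.
fpm build
fpm run --runner mpiexec
```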
Working with automotive OEMs, I see that the non-tech world still uses a lot of ancient computing stuff (think paid RHEL subscriptions that ship gfortran 4.9), and a lot of engineering modelling is done on Windows. So there is no way any of the modern MPI features for Fortran can be used without also providing a fallback to the ancient stuff.
Indeed it is! I played with it today and added a way to build it with CMake, since I am planning on using it in a CMake project. I will probably also try to extend the testing!
This is painful to learn and be aware of. I’ve been wanting to set the minimum compiler requirement for an app I develop to at least gcc 11, haha.
Indeed vapaa looks great, but don’t you still have the same problem with module incompatibility that you do with an MPI distribution’s mpi and mpi_f08 modules, i.e. you just can’t take a vapaa library compiled with gfortran and access the resulting modules with ifx? You are still forced to build compiler-specific versions of vapaa. The big advantage of vapaa is that you don’t have to build the rest of MPI (the C/C++ part) for each compiler (assuming the Fortran compiler’s C interop works with the underlying MPI code built with a C compiler from another vendor).
The MPI C ABI effort is on track for release in June 2025, at which point we will be able to do VAPAA in an implementation-agnostic way that works with both Open MPI- and MPICH-based MPI libraries without recompilation.
MPI_F08 is part of the standard already, and that’s part of the problem. Supporting Fortran MPI without being constrained by the standard would make things better.
That’s of course implementable in the MPI module, which I intend to add to VAPAA later. I can make it configurable, so it can utilize F08 features when the user wants, or do the bare minimum that is highly portable but unsafe from the perspective of compiler checking.
When VAPAA supports the MPI module, it should be able to work with a pure F90 compiler, if somebody needs to use an obsolete compiler.
I’m not sure when I’ll make time for all of this. I’m not sure VAPAA will ever be feature-complete w.r.t. the standard API, but I want it to support all the common stuff well, in both MPI_F08 and MPI modules.
That will be a great time! I know it may sound reductive for Fortran, but having a clean Fortran interface to stable C code, one that lets the C side wrangle all the macro and build-time cryptic details, is a great strength.
Does the standard mandate deployment of the .mod file? In other words, could a future standard, without breaking previous rules, just add an implementation of said module? Just thinking aloud, but if a C ABI is mandated, then the MPI_F08 module source could become relatively close to a plain C-Fortran interface source (under the assumption that all Fortran types become bind(C) counterparts of the corresponding C structures)?
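Just to make the thought concrete, a purely hypothetical sketch (not the real MPI_F08 source; it assumes an MPICH-like ABI where a communicator handle fits in a C int):

```fortran
! Hypothetical sketch, not the actual MPI_F08 module: a bind(c) handle type that
! mirrors the C communicator, plus a thin interface straight to the C symbol.
! A Fortran-style wrapper (optional ierror, etc.) would then sit on top of this.
module mpi_f08_sketch
   use, intrinsic :: iso_c_binding, only: c_int
   implicit none

   type, bind(c) :: MPI_Comm
      integer(c_int) :: MPI_VAL    ! assumes an int-sized handle, as in MPICH
   end type MPI_Comm

   interface
      function MPI_Comm_size_c(comm, size) bind(c, name="MPI_Comm_size") result(ierror)
         import :: MPI_Comm, c_int
         type(MPI_Comm), value       :: comm
         integer(c_int), intent(out) :: size
         integer(c_int)              :: ierror
      end function MPI_Comm_size_c
   end interface
end module mpi_f08_sketch
```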
This would be great! With some preprocessing, I bet it would be possible to overcome missing dimension(..) functionality.
We do not specify an implementation, and this will not change. It is possible that at some point in the future we will specify an implementation of the Fortran bindings as a side document, but that would likely go hand-in-hand with removing the Fortran API from the standard, which I do not foresee happening.
The MPI module does not specify support for subarrays and thus does not need type(*), dimension(..). For MPI_F08, it is optional.