Fpm MPI metapackage and mpi modules

I am in the process of writing some small tutorials for fpm-docs that use MPI, and I was surprised to see that fpm metapackages only use mpif.h, which was deprecated in MPI 4.1.

Is there a benefit to using mpif.h as opposed to use mpi, or, better yet, the more modern use mpi_f08?
If not, should fpm focus on supporting the modern module approaches? I would also be interested to understand the complexities of having fpm search for MPI modules instead.
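For concreteness, here is a minimal sketch of the difference as I understand it (assuming an MPI installation that provides both interfaces): mpi_f08 gives typed handles, an optional error argument, and explicit interfaces the compiler can check, while mpif.h gives integer handles and essentially no checking.

```fortran
! Legacy include-file style (shown as comments for comparison):
!   include 'mpif.h'
!   integer :: rank, ierr
!   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! integer handles, mandatory ierr

! Modern mpi_f08 style: derived-type handles, optional ierror,
! and explicit interfaces that the compiler can check.
program rank_f08
  use mpi_f08
  implicit none
  integer :: rank

  call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  print '(a,i0)', 'my rank: ', rank
  call MPI_Finalize()
end program rank_f08
```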

1 Like

Giannis,

I see now. I believe you are not seeing module support because the module capability is implemented on trunk and has not made it into an official release yet, so if you’re using 0.9.0, it does not work.

The MPI metapackage is fully compatible with using F90+ style modules, and I agree it’s the thing to do in your examples.

Some comments on the way it’s been designed:

  • mpi="*" literally means “use any default version currently available”; why should we restrict that to the newest versions only? I believe the choice to maximize platform and version compatibility is the reasonable one for the default case.
  • Semantic versioning was supposed to be coming to fpm; that was (and is) the mechanism that would make such restrictions possible.
  • The native library of one of the 3 major platforms (Microsoft MPI on Windows) does not support Fortran modules, only mpif.h.
  • Little/no community feedback so far (maybe it’s not much used yet, or maybe “no news is good news”).

Bottom-line comment: having mpif.h support is very useful for my use cases, as my customers use engineering software that runs either on embarrassingly old Unix clusters with ancient compilers, or on MS Windows machines.

2 Likes

Awesome, thanks. I completely forgot that there was no release. Should we make a new release or is there something else that needs merging?

2 Likes

I think it would be nice to have a bugfix 0.9.1 release; there have been quite a few PRs since 0.9.0.

I would go with v0.10.0, precisely because such cool features were added (and there are a few code changes).

1 Like

Ok then let’s draft it and try to get it deployed soon (like, next week?)

2 Likes

Any news on the small MPI tutorials?

I am trying to build a program using MUMPS, and it appears that my knowledge of MPI and fpm+MPI is not deep enough to achieve that goal…

It is important to remember that, due to the incompatibility of module formats across Fortran compilers, you can only use MPI modules that were built with the same compiler you are using to build the code that USEs mpi or mpi_f08 (i.e. you can’t use a module built with gfortran in a code built with Intel, etc.). Given how important MPI is in almost all parallel codes that run on HPC hardware, this use case alone should be enough of a reason for the standards committee to define a transportable module format that works with all compilers, irrespective of vendor.

1 Like

I’m also messing with MPI and FPM at the moment.

It’s working nicely; adding mpi = "*" under [dependencies] in fpm.toml is the only thing you need. Working example
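For anyone who lands here later, a minimal sketch of the kind of app such an fpm project might contain (this is not the linked example, just an illustration with made-up names): rank 0 sends one integer to rank 1 via the mpi_f08 interface.

```fortran
! Minimal illustrative sketch: rank 0 sends one integer to rank 1.
program ping
  use mpi_f08
  implicit none
  integer :: rank, msg

  call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)

  if (rank == 0) then
    msg = 42
    call MPI_Send(msg, 1, MPI_INTEGER, 1, 0, MPI_COMM_WORLD)
  else if (rank == 1) then
    call MPI_Recv(msg, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE)
    print '(a,i0)', 'rank 1 received ', msg
  end if

  call MPI_Finalize()
end program ping
```

If I remember the flag correctly, it can then be launched with something like fpm run --runner mpiexec (plus the usual mpiexec options, e.g. -n 2 for this sketch).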

Let me know if you can work it out.

1 Like

Thank you very much, Matt. Your example helped me progress to the stage where I can successfully run fpm build with MPI.

My next challenge towards making a program that uses the MUMPS solver is to provide pointer arrays to MUMPS (and to manage their lifecycle). My equation matrix and rhs vector are currently allocatable arrays, but MUMPS expects pointers to these arrays.
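To make the question concrete, here is a minimal sketch (plain Fortran, no MPI calls yet; all names are just illustrative) of the pointer association I have in mind:

```fortran
! Minimal sketch: associating pointers with allocatable targets.
program ptr_to_alloc
  implicit none
  ! The allocatable arrays need the TARGET attribute so pointers
  ! can be associated with them without copying the data.
  real(kind=8), allocatable, target :: a(:), rhs(:)
  real(kind=8), pointer :: a_ptr(:) => null(), rhs_ptr(:) => null()

  allocate(a(100), rhs(10))
  a = 0.0d0
  rhs = 1.0d0

  ! Point at the existing data; the solver would receive a_ptr / rhs_ptr.
  a_ptr => a
  rhs_ptr => rhs

  ! Disassociate before deallocating the targets, so no dangling pointers remain.
  nullify(a_ptr, rhs_ptr)
  deallocate(a, rhs)
end program ptr_to_alloc
```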

Do you have an example of managing pointer arrays within the MPI lifecycle stages (i.e. between mpi_init() and mpi_finalize())?

Should I create a new thread for this question?

I can’t say I have insight on pointer arrays in MPI. I could see it becoming a problem if multiple ranks create their own pointer arrays.

A new thread might help bring eyes to your question, yes.

I went down that rabbit hole last night, and right now the best thing out there seems to be Jeff Hammond’s projects that create an interface; see vapaa and havaita. From the posts I read through yesterday, it seems we need more Fortran MPI programmers!

1 Like

vapaa looks like a great library!

Having a truly compiler-independent Fortran MPI implementation via C interoperability is the only safe way to go imho; nice to see that @JeffH did that for everyone! Definitely looking forward to digging more into it.

Future MPI standards should go the extra mile: drop all the non-portable module stuff and just embed such an MPI_F08 implementation as part of the Standard instead.

In fpm we were constrained, for compatibility reasons (on MS Windows), not to enforce any requirements on the modules nor any TKR checking. So the mpi meta-package just provides the correct build, link, and run commands, but makes no assumptions about the availability of the mpi modules.
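For reference, a minimal sketch of the lowest common denominator the meta-package has to keep working, i.e. the include-file interface that even mpif.h-only installations such as MS-MPI provide (program name and details are just illustrative):

```fortran
! The include-file interface: no module, no TKR checking, but available on
! every MPI installation, including mpif.h-only ones such as MS-MPI.
program hello_mpif_h
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print '(a,i0,a,i0)', 'rank ', rank, ' of ', nprocs
  call MPI_Finalize(ierr)
end program hello_mpif_h
```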

Working with automotive OEMs, I see that the non-tech world still uses a lot of ancient computing stuff (think paid RHEL subscriptions that ship gfortran 4.9), and a lot of engineering modelling is done on Windows. So there is no way any of the modern MPI features for Fortran can be used without also providing a fallback to the ancient stuff.

1 Like

Indeed it is! I played with it today and added a way to build it with CMake, since I am planning to use it in a CMake project. I will probably try to extend the testing as well!

This is painful to learn and be aware of :frowning: I’ve been wanting to set the minimum compiler requirement for an app I develop to at least gcc 11, haha.

2 Likes

Indeed, vapaa looks great, but don’t you still have the same module-incompatibility problem that you have with an MPI distribution’s mpi and mpi_f08 modules? I.e., you just can’t take a vapaa library compiled with gfortran and access the resulting modules with ifx; you are still forced to build compiler-specific versions of vapaa. The big advantage of vapaa is that you don’t have to build the rest of MPI (the C/C++ part) for each compiler (assuming the Fortran compiler’s C interop works with the underlying MPI code built with a C compiler from another vendor).

The MPI C ABI effort is on track for release in June 2025, at which point we will be able to do VAPAA in an implementation-agnostic way that works with both Open MPI and MPICH-based MPI libraries without recompilation.

MPI_F08 is part of the standard already, and that’s part of the problem. Doing Fortran MPI support without being constrained to the standard would make things better.

That’s of course implementable in the MPI module, which I intend to add to VAPAA later. I can make it configurable, so it can utilize F08 features when the user wants, or do the bare minimum that is highly portable but unsafe from the perspective of compiler checking.

When VAPAA supports the MPI module, it should be able to work with a pure F90 compiler, if somebody needs to use an obsolete compiler.

I’m not sure when I’ll make time for all of this. I’m not sure VAPAA will ever be feature-complete w.r.t. the standard API, but I want it to support all the common stuff well, in both MPI_F08 and MPI modules.

4 Likes

Thanks for the quick response @JeffH -

That will be a great time! I know it may sound reductive for Fortran, but having a clean Fortran interface to stable C code, one that lets the C side wrangle all the macro and build-time cryptics, is a great strength.

Does the standard mandate deployment of the .mod file? In other words, could a future standard, without breaking previous rules, just add an implementation of said module? Just thinking aloud, but if a C ABI is mandated, then the MPI_F08 module source could become relatively close to a C-Fortran interface source (under the assumption that all Fortran types become bind(C) to the corresponding C structures)?

This would be great! With some preprocessing, I bet it would be possible to overcome missing dimension(..) functionality.
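Just to illustrate what I am imagining, here is a hypothetical sketch of such a thin bind(C) layer; every name, type, and argument list is invented for the example and not taken from any MPI implementation or from the standard:

```fortran
! Hypothetical sketch only: names and signatures are invented for illustration.
module mpi_f08_sketch
  use, intrinsic :: iso_c_binding, only: c_int
  implicit none

  ! A handle type mirroring a plain C struct, as speculated above.
  type, bind(C) :: comm_handle
    integer(c_int) :: val
  end type comm_handle

  interface
    ! type(*), dimension(..) lets a buffer of any type and rank pass
    ! straight through to a C wrapper as a CFI descriptor.
    subroutine sketch_send(buf, count, comm, ierror) bind(C, name='sketch_send')
      import :: c_int, comm_handle
      type(*), dimension(..), intent(in) :: buf
      integer(c_int), value :: count
      type(comm_handle), intent(in) :: comm
      integer(c_int), intent(out) :: ierror
    end subroutine sketch_send
  end interface
end module mpi_f08_sketch
```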

We do not specify an implementation, and this will not change. It is possible that at some point in the future we will specify an implementation of the Fortran bindings as a side document, but that would likely go hand-in-hand with removing the Fortran API from the standard, which I do not foresee happening.

The MPI module does not specify support for subarrays and thus does not need type(*), dimension(..). For MPI_F08, it is optional.

1 Like