For Fortran to stay relevant, it needs continued representation on the committees that develop parallel programming standards such as MPI. At a 2017 MPI Symposium, Rolf Rabenseifner gave a presentation, "From MPI-1.1 to MPI-3.1, publishing and teaching, with a special focus on MPI-3 shared memory and the Fortran nightmare". Here are slides 14-17:
Fortran, a nightmare ?!?
• Only a few MPI Forum members speak Fortran – the few who do had a hard job getting MPI and Fortran consistent
• Major problems: compiler optimizations may lead to wrong MPI execution
  – with all MPI_Wait/Test routines
  – with MPI_BOTTOM together with derived datatypes
  – with absolute addresses
  – when calling nonblocking routines with strided data arrays that are not simply contiguous
• The inconsistency problem was already known in MPI-2.0 (1997!) – but nothing more than some text about a user-written "dd" dummy routine as a workaround got through the Forum
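To make the code-movement problem concrete, here is a minimal sketch (ranks, tag, and buffer size are illustrative) of the classic hazard with the old mpi module. Because the argument list of MPI_Wait never mentions the receive buffer, a compiler performing legal Fortran optimizations may read the buffer from a copy taken before the receive completed:

```fortran
! Minimal sketch of the register-optimization hazard with "use mpi".
! Ranks, message tag, and buffer size are illustrative.
program wait_hazard
  use mpi
  implicit none
  integer :: rank, ierror, request
  integer :: status(MPI_STATUS_SIZE)
  real    :: buf(100)

  call MPI_Init(ierror)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierror)

  if (rank == 0) then
    call MPI_Irecv(buf, 100, MPI_REAL, 1, 17, MPI_COMM_WORLD, request, ierror)
    ! MPI_Wait does not reference buf, so the compiler may move the
    ! read of buf(1) above this call or keep a stale register copy:
    ! a legal Fortran optimization, but wrong MPI execution.
    call MPI_Wait(request, status, ierror)
    print *, 'received ', buf(1)
  else if (rank == 1) then
    buf = 42.0
    call MPI_Send(buf, 100, MPI_REAL, 0, 17, MPI_COMM_WORLD, ierror)
  end if

  call MPI_Finalize(ierror)
end program wait_hazard
```

The MPI-2 era workaround the slide alludes to was to pass buf to an external dummy routine (the "dd" trick) right after MPI_Wait, forcing the compiler to assume the buffer had been touched.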
Fortran, a nightmare – solved in MPI-3.0 (15 years later) ?!?
• For MPI-3.0 we received full service from the Fortran standardization body through the "Fortran Technical Specification TS 29113"
  – enabling the new Fortran module mpi_f08, which is for the first time fully consistent with the Fortran standard
  – major solution: Fortran extended the ASYNCHRONOUS keyword to cover any asynchronous use case, including MPI nonblocking calls and MPI_BOTTOM
• In MPI-3.0 we did the backend wrong – my apologies
  – a whole section in an errata, MPI-3.1
  – this really slowed down the implementations
  – some MPI implementations still claim to be MPI-3.1 compliant, although they provide neither compile-time argument checking nor name-based argument lists with the mpi module
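For contrast, here is a minimal sketch of the same exchange with the MPI-3.0 mpi_f08 module: declaring the buffer ASYNCHRONOUS tells the compiler it may change between the nonblocking start and its completion, and the module adds compile-time argument checking, name-based argument lists, and an optional ierror.

```fortran
! Minimal sketch of the same exchange with the MPI-3.0 mpi_f08 module.
program f08_solution
  use mpi_f08
  implicit none
  type(MPI_Request) :: request
  integer :: rank
  real, asynchronous :: buf(100)   ! may change during nonblocking calls

  call MPI_Init()                  ! ierror is optional in mpi_f08
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)

  if (rank == 0) then
    ! Arguments are type-checked at compile time and may be passed by name.
    call MPI_Irecv(buf, 100, MPI_REAL, source=1, tag=17, &
                   comm=MPI_COMM_WORLD, request=request)
    call MPI_Wait(request, MPI_STATUS_IGNORE)
    print *, 'received ', buf(1)
  else if (rank == 1) then
    buf = 42.0
    call MPI_Send(buf, 100, MPI_REAL, dest=0, tag=17, comm=MPI_COMM_WORLD)
  end if

  call MPI_Finalize()
end program f08_solution
```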
Teaching complete advanced MPI-3.1
• Important for users: they can take advantage
  – of all the work in the MPI Forum, and
  – of the implementations of all the new MPI features in many MPI libraries
• My MPI-3.1 course is based on the MPI-1.1 course from EPCC – they did a great job!
• Nonblocking collectives
• The New Fortran Module mpi_f08
• Groups & Communicators, Environment Management
o MPI_Comm_split, intra- & inter-communicators
o Re-numbering on a cluster, collective communication on inter-communicators, info object, naming & attribute caching, implementation information
• Virtual topologies
o including neighborhood communication + MPI_BOTTOM
• One-sided Communication
• Shared Memory One-sided Communication
o including hybrid MPI and MPI-3 shared memory programming (see the sketch after this list)
o MPI memory models and synchronization rules
• Derived datatypes
o including advanced features, alignment, resizing
• Parallel File I/O
• MPI and Threads, e.g., hybrid MPI and OpenMP
• Probe, Persistent Requests, Cancel
• Process Creation and Management
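Since MPI-3 shared memory is the presentation's special focus, here is a minimal sketch of the shared memory one-sided model taught in the course (window layout and values are illustrative, and a 4-byte default integer is assumed): ranks on one node allocate a single shared window, store into their own segment with plain Fortran assignment, and read a neighbor's segment directly after synchronizing.

```fortran
! Minimal sketch of MPI-3 shared memory windows with mpi_f08.
! Assumes a 4-byte default integer; sizes and values are illustrative.
program shm_sketch
  use mpi_f08
  use, intrinsic :: iso_c_binding, only: c_ptr, c_f_pointer
  implicit none
  type(MPI_Comm) :: nodecomm
  type(MPI_Win)  :: win
  type(c_ptr)    :: mybase, nbrbase
  integer        :: rank, nranks, disp_unit
  integer(kind=MPI_ADDRESS_KIND) :: winsize, nbrsize
  integer, pointer :: mine(:), nbr(:)

  call MPI_Init()
  ! Shared memory only works among ranks on the same node, so first
  ! split MPI_COMM_WORLD into node-local communicators.
  call MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, &
                           MPI_INFO_NULL, nodecomm)
  call MPI_Comm_rank(nodecomm, rank)
  call MPI_Comm_size(nodecomm, nranks)

  ! Each rank contributes one integer to one contiguous node-wide window.
  winsize = 4_MPI_ADDRESS_KIND
  call MPI_Win_allocate_shared(winsize, 4, MPI_INFO_NULL, nodecomm, &
                               mybase, win)
  call c_f_pointer(mybase, mine, [1])

  mine(1) = 100 * rank          ! plain Fortran store, no MPI call
  call MPI_Win_fence(0, win)    ! separate the store and load epochs

  ! Map the next rank's segment and read it directly.
  call MPI_Win_shared_query(win, mod(rank + 1, nranks), nbrsize, &
                            disp_unit, nbrbase)
  call c_f_pointer(nbrbase, nbr, [1])
  print *, 'rank', rank, 'sees neighbor value', nbr(1)

  call MPI_Win_free(win)
  call MPI_Finalize()
end program shm_sketch
```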
Rabenseifner went on to write a 720-page tutorial, "Introduction to the Message Passing Interface (MPI)" (2023), and has given a course on Parallel programming with MPI/OpenMP.