Questions from a Fortran HPC Webinar

I strongly suggest continuing to use the term ‘Coarray Fortran’, especially in the titles of papers, so that people searching for coarray programming can find appropriate results.

Several questions in section B appear to be based on wrong assumptions about the Fortran language; something is already going wrong there.

To describe the Fortran programming language to others, we should be open-minded and honest: From a programmer’s point of view, Coarray Fortran is a completely different type of programming language.

I personally think section B is very good; those are excellent questions.

Fortran is a parallel language. Coarrays are intrinsically part of Fortran, so I think we should just use the term Fortran. We should add more parallel features to Fortran and improve the existing ones, such as better task-based parallelism and the slight improvements needed to run well on GPUs.
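To illustrate the point that coarrays are plain Fortran syntax rather than a separate language, here is a minimal sketch (assuming a Fortran 2008 compiler with coarray support, e.g. gfortran with OpenCoarrays and `-fcoarray=lib`):

```fortran
program coarray_sum
   implicit none
   integer :: total[*]   ! a scalar coarray: one copy of total per image
   integer :: i

   total = this_image()  ! each image stores its own image number
   sync all              ! image control: make all assignments visible

   if (this_image() == 1) then
      do i = 2, num_images()
         total = total + total[i]   ! remote read via the cosubscript [i]
      end do
      print *, 'sum of image numbers:', total
   end if
end program coarray_sum
```

Everything here, including `this_image()`, `num_images()`, `sync all`, and the square-bracket cosubscripts, has been standard Fortran since the 2008 revision; no external library appears in the source.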

In some sense, saying that Coarray Fortran is a completely different type of programming language would be like saying that Fortran 90 is a completely different type of programming language compared to FORTRAN 77. That is true from many perspectives, and it is also true that the name changed from FORTRAN to Fortran; but at the end of the day it is still Fortran, and F77 is a subset of it, just as coarrays are a subset of Fortran.

I recommend just calling it Fortran.


The distinction is the different run-time we are programming for:

Most programming languages, including every Fortran standard so far (including Fortran 2008/2018, of course), target an underlying sequential run-time.

PGAS languages target an underlying parallel run-time, and do so from the start to the end of execution.

The point with Fortran 2018 is that it describes both in a single language: programming for the sequential as well as the parallel run-time. And this works perfectly, IMO.

Nevertheless, following the rules of coarray programming, “…coarray syntax should appear only in isolated parts of the source code.” [Modern Fortran Explained, section 17.1]. We can (and should) follow that rule easily, but the resulting parallel codes may then look very similar to sequential Fortran 2018 codes, since they no longer contain any coarrays. Moreover, we will be confronted with both sequential and parallel codes in the same programming project, written in the same language (Fortran 2018). If these codes look very similar, we must find a way to distinguish between them; calling them all Fortran (2018) is just too plain. To distinguish between these codes, I am using the term Coarray Fortran to name the parallel ones.
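The rule about isolating coarray syntax can be sketched as follows, with a hypothetical halo-exchange module (the names and the one-dimensional decomposition are my own illustration, not taken from Modern Fortran Explained):

```fortran
module halo_mod
   implicit none
   private
   public :: exchange_halo
   real, allocatable :: buffer(:)[:]   ! the only coarray, hidden in this module
contains
   ! Exchange boundary values of a 1-D field with neighbouring images.
   ! Callers pass plain arrays and image numbers; no coarray syntax leaks out.
   ! All images must call this with the same field size.
   subroutine exchange_halo(field, left, right)
      real,    intent(inout) :: field(:)
      integer, intent(in)    :: left, right   ! neighbour image numbers
      integer :: n
      n = size(field)
      if (.not. allocated(buffer)) allocate(buffer(n)[*])  ! collective allocation
      buffer(:) = field(:)
      sync all
      field(1) = buffer(n)[left]    ! pull the halo values from the neighbours
      field(n) = buffer(1)[right]
      sync all
   end subroutine exchange_halo
end module halo_mod
```

A caller simply writes `call exchange_halo(u, left_img, right_img)`; the rest of the program then looks like ordinary serial Fortran, which is exactly the resemblance described above.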

Taking that view a step further, the different meanings of similar-looking Coarray Fortran codes and serial Fortran codes seem just too extreme to call this a single programming language from a programmer’s view. Of course, by definition it remains a single Fortran programming language.

I can understand if others can’t follow me yet, but I mean what I say.

Michael Siehl

@Federchen I think calling the parts serial and parallel, or serial and coarray-based, is fine. Do you want to join our next Fortran call?

Fortran Monthly Call: July 2021

I would love to discuss it further.


The discussion about serial vs parallel Fortran reminds me strongly of an item from Effective C++: 55 Specific Ways to Improve Your Programs and Designs, Third Edition, By Scott Meyers:

Item 1: View C++ as a federation of languages.

Today’s C++ is a multiparadigm programming language, one supporting a combination of procedural, object-oriented, functional, generic, and metaprogramming features. This power and flexibility make C++ a tool without equal, but can also cause some confusion. All the “proper usage” rules seem to have exceptions. How are we to make sense of such a language?

The easiest way is to view C++ not as a single language but as a federation of related languages. Within a particular sublanguage, the rules tend to be simple, straightforward, and easy to remember. When you move from one sublanguage to another, however, the rules may change. To make sense of C++, you have to recognize its primary sublanguages. Fortunately, there are only four:

  • C. Way down deep, C++ is still based on C. Blocks, statements, the preprocessor, built-in data types, arrays, pointers, etc., all come from C. In many cases, C++ offers approaches to problems that are superior to their C counterparts (e.g., see Items 2 (alternatives to the preprocessor) and 13 (using objects to manage resources)), but when you find yourself working with the C part of C++, the rules for effective programming reflect C’s more limited scope: no templates, no exceptions, no overloading, etc.
  • Object-Oriented C++. This part of C++ is what C with Classes was all about: classes (including constructors and destructors), encapsulation, inheritance, polymorphism, virtual functions (dynamic binding), etc. This is the part of C++ to which the classic rules for object-oriented design most directly apply.
  • Template C++. This is the generic programming part of C++, the one that most programmers have the least experience with. Template considerations pervade C++, and it’s not uncommon for rules of good programming to include special template-only clauses (e.g., see Item 46 on facilitating type conversions in calls to template functions). In fact, templates are so powerful, they give rise to a completely new programming paradigm, template metaprogramming (TMP). Item 48 provides an overview of TMP, but unless you’re a hard-core template junkie, you need not worry about it. The rules for TMP rarely interact with mainstream C++ programming.
  • The STL. The STL is a template library, of course, but it’s a very special template library. Its conventions regarding containers, iterators, algorithms, and function objects mesh beautifully, but templates and libraries can be built around other ideas, too. The STL has particular ways of doing things, and when you’re working with the STL, you need to be sure to follow its conventions.

Disclaimer: text copied from here.

Since Fortran supports several paradigms (similar to C++), i.e. procedural, array-oriented, object-oriented, and parallel (PGAS) programming, it can be helpful to step back while designing code, figure out what level/paradigm is needed, and then leverage the most suitable language constructs.
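As a small sketch of that multiparadigm view, here is the same dot product written three ways in standard Fortran (procedural, array-oriented, and with the explicitly parallel `do concurrent` construct):

```fortran
program paradigms
   implicit none
   integer, parameter :: n = 1000
   real :: a(n), b(n), c(n)
   real :: s1, s2, s3
   integer :: i

   call random_number(a)
   call random_number(b)

   ! Procedural (FORTRAN 77 style): an explicit loop
   s1 = 0.0
   do i = 1, n
      s1 = s1 + a(i)*b(i)
   end do

   ! Array-oriented (Fortran 90 style): a whole-array intrinsic
   s2 = dot_product(a, b)

   ! Explicitly parallel (Fortran 2008): iterations may run in any order
   do concurrent (i = 1:n)
      c(i) = a(i)*b(i)
   end do
   s3 = sum(c)

   print *, s1, s2, s3   ! all three agree up to rounding
end program paradigms
```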


@certik Thanks very much for that invitation, but for now my way forward is to present and explain the distributed objects model in a GitHub repository, by providing a simple overview, a simple (generic) code example, and a paper describing the model in more detail. The code example and the paper are already finished as preview versions, but both still need further revision before I can upload them. I need to start with the simple overview to provide easier access to the topics.

While the codes became simpler and simpler with each revision, the distributed objects model became more and more sophisticated:

  • Using OOP techniques to implement parallel models;

  • Data transfers through coarrays, together with the required synchronization, are encapsulated in a low-level layer; thus the programmer is not required to declare coarrays or to synchronize the data transfers;

  • Fault-tolerant execution of the parallel logic codes;

  • Distributed objects are naturally implemented as collections of distributed objects (high levels of parallelism at scale);

  • Distributed arrays (coarrays) remain the underlying data structures and lead to concurrently running computations on both the distributed array data as a whole (collected data) and its distributed parts;

  • Proper compile-time analysis of the parallel logic codes using OpenCoarrays/gfortran (compile-time analysis treats the parallel codes as OOP codes, which reliably prevents ICEs when using OpenCoarrays).
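The encapsulation idea from the list above could be sketched roughly like this; note that this is my own minimal illustration of hiding coarrays and synchronization behind an OOP interface, not Michael Siehl’s actual implementation, and the type and procedure names are hypothetical:

```fortran
module dist_vector_mod
   implicit none
   private
   public :: dist_vector

   ! A distributed vector: each image holds one chunk.
   ! The coarray is a private component, so user code never
   ! declares coarrays or writes sync statements itself.
   type :: dist_vector
      real, allocatable, private :: chunk(:)[:]
   contains
      procedure :: init
      procedure :: set_local
      procedure :: global_sum
   end type dist_vector

contains

   subroutine init(self, n_local)
      class(dist_vector), intent(inout) :: self
      integer, intent(in) :: n_local
      allocate(self%chunk(n_local)[*])   ! collective; implies synchronization
      self%chunk = 0.0
   end subroutine init

   subroutine set_local(self, values)
      class(dist_vector), intent(inout) :: self
      real, intent(in) :: values(:)
      self%chunk(:) = values(:)
   end subroutine set_local

   function global_sum(self) result(s)
      class(dist_vector), intent(in) :: self
      real :: s
      s = sum(self%chunk)
      call co_sum(s)   ! Fortran 2018 collective; synchronization is implicit
   end function global_sum

end module dist_vector_mod
```

User code then works with `type(dist_vector)` objects through ordinary method calls, which is the sense in which the parallel code reads like OOP code.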

The Fortran syntax is mainly standard OOP syntax, but the execution model is far from anything I could find elsewhere: without a description (and without access to the low-level layer), it is close to impossible to figure out what is going on there.

The distributed objects model that I implemented in Fortran is an extension of the fragmented objects model.

I will inform you here in the forum when I start uploading to GitHub.

Michael Siehl

In fact, the topic of distributed objects appears to be heavily related to C++ as well: UPC++ is another PGAS implementation supporting distributed objects, and it seems to be heavily based on using RPCs to control the execution flow of a parallel application. Distributed objects there are completely different from the fragmented objects model that I am building with Coarray Fortran. I can’t tell whether it is a good idea to implement distributed objects based on RPCs; some say it’s not.

The other PGAS implementation for C++, Coarray C++ (Cray), could profit directly from any good progress made in our Coarray Fortran programming.


Thank you all for these in-depth answers.

I would like to clarify the original intent of the webinar to keep this discussion on track. In one sentence: “What should a Fortran programmer know today to get performance, especially on next-gen HPC resources?”

If I were to candidly draft a global answer to this question, this would be my attempt:

First we split the use cases: “I want to write Fortran to run on…”

  1. one core of my laptop (serial, no parallelization or GPU)
  2. all the CPU cores of my workstation
  3. all the CPU cores and GPUs of my workstation
  4. the CPUs of an HPC cluster
  5. the CPUs and GPUs of a hybrid HPC cluster (recently, most of the power of supercomputers has been delivered by the GPUs, especially for better energy efficiency, and due to the A.I. hype too…)

The first level is for reference and is obviously limited only by the algorithm. The audience of the webinar came for level 5.

For levels 2 and 3, this is “shared memory” turf: there you can add OpenMP to your Fortran, but you can also do everything within the Fortran standard, using coarrays.
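For instance, a shared-memory loop can be parallelized with a single OpenMP directive (a minimal sketch; compile with `-fopenmp` or your compiler’s equivalent flag):

```fortran
program omp_saxpy
   use omp_lib, only: omp_get_max_threads
   implicit none
   integer, parameter :: n = 1000000
   real, parameter :: a = 2.0
   real, allocatable :: x(:), y(:)
   integer :: i

   allocate(x(n), y(n))
   call random_number(x)
   call random_number(y)

   ! Distribute the iterations over all available threads.
   !$omp parallel do
   do i = 1, n
      y(i) = a*x(i) + y(i)
   end do
   !$omp end parallel do

   print *, 'ran on up to', omp_get_max_threads(), 'threads'
end program omp_saxpy
```

Without OpenMP support enabled, the directives are treated as comments and the loop simply runs serially.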

For level 4: memory is no longer shared. In addition to parallelism you have to deal with memory placement, and data exchanges must be sized carefully: too small and you have too many messages; too large and you are swapping. This is essentially done through MPI libraries.
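A minimal distributed-memory sketch with the modern `mpi_f08` bindings, exchanging a buffer around a ring of ranks (the buffer size here is arbitrary, chosen only for illustration):

```fortran
program mpi_ring
   use mpi_f08
   implicit none
   integer :: rank, nranks
   real :: sendbuf(1000), recvbuf(1000)
   type(MPI_Status) :: status

   call MPI_Init()
   call MPI_Comm_rank(MPI_COMM_WORLD, rank)
   call MPI_Comm_size(MPI_COMM_WORLD, nranks)
   sendbuf = real(rank)

   ! Send to the right neighbour, receive from the left (ring pattern);
   ! MPI_Sendrecv avoids the deadlock a naive send/recv pair could cause.
   call MPI_Sendrecv(sendbuf, size(sendbuf), MPI_REAL, mod(rank+1, nranks), 0, &
                     recvbuf, size(recvbuf), MPI_REAL, mod(rank-1+nranks, nranks), 0, &
                     MPI_COMM_WORLD, status)

   call MPI_Finalize()
end program mpi_ring
```

Run with e.g. `mpirun -n 4 ./mpi_ring` after compiling with the MPI wrapper (`mpifort`).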

For level 5: recent computers have several levels of shared memory, with different speeds, on each CPU/GPU node. Directive-based standards such as OpenACC allow you to “port” and then “tune” your parallel code to a GPU/CPU machine, with time and sweat.
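An OpenACC port of a simple loop might look like this (a sketch assuming a compiler with OpenACC support, e.g. nvfortran with `-acc`; without such support the directives are ignored and the loop runs serially):

```fortran
program acc_saxpy
   implicit none
   integer, parameter :: n = 1000000
   real, parameter :: a = 2.0
   real, allocatable :: x(:), y(:)
   integer :: i

   allocate(x(n), y(n))
   call random_number(x)
   call random_number(y)

   ! Offload the loop to the accelerator, stating the data movement:
   ! x is only read on the device, y is copied in and back out.
   !$acc parallel loop copyin(x) copy(y)
   do i = 1, n
      y(i) = a*x(i) + y(i)
   end do

   print *, y(1)
end program acc_saxpy
```

The “tuning” step the post mentions is then about refining exactly these data clauses and loop mappings.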

Again, the audience wanted to write Fortran statements (in my lab, researchers say they can write Fortran without a computer scientist, as opposed to C++; highly subjective, but we write for humans, not computers) to run on future HPC resources. So, what is the option that will still be included in compilers and optimized (i.e. competitive with a C++ equivalent) for the next decade, on the next computers?



This is an excellent question, but the answer should be given by an HPC vendor. They control the hardware tradeoffs and the software stack.

I think the question is broader than one for just an HPC vendor. To answer it, I think the HPC vendors, the compiler vendors (open source and commercial), the hardware vendors, the Fortran users, and the Standards Committee should all collaborate to figure out a vision and a path forward for Fortran.


All apologies, my last question is a dead end (a unique vision and path forward with this many actors is probably a utopia).

Same player, shoot again; let’s get practical:
What are the present options for writing Fortran code that are serious contenders for the next decade on the next HPC computers?