Fortran in the TIOBE Top 10

Thanks @FortranFan, @MarDie, @nja. Almost all of these can be done.

The interoperability of a C++ class and a Fortran (and Python) class can be done via our three frontends (LFortran, LC, LPython), which all use the same intermediate representation (ASR) and represent classes (and arrays and other features) in identical ways. LFortran is a superset of Fortran, while LC/LPython are fundamentally subsets of C++ and Python. But eventually it should be possible to support quite a large and usable subset (it’s already usable). So in practice you would have to compile the whole file containing the interoperable C++ class with LC, and interact with the rest of the C++ codebase via our C++ interop. The same applies to LPython. I think it could work quite smoothly in practice.

5 Likes

One thing affecting the continued use of Fortran is the perception that it is not needed because of the availability of scientific and technical software in C++. I worked for a computing applications group at the Oak Ridge National Laboratory from 1978 through 2000. We were composed mostly of engineers and scientists, with a few mathematicians and programmers thrown in. The subgroup I was in developed mostly nuclear engineering applications, with some code having been written originally in the 1960s. Almost all of it was in Fortran, with a little C and assembler thrown in to handle specific hardware.

We had two major code suites. The AMPX system prepared nuclear cross section libraries. The SCALE system performed nuclear safety analyses (criticality safety and shielding) for spent fuel shipping casks, fuel storage pools, simple fuel assembly models, etc. In the 1990s all the codes were converted to Fortran 90/95, which was needed, but the use of automated conversion tools for an initial cut broke some code.

In the late 1990s, when a large number of the original developers were retiring or moving on to other jobs, the laboratory hired a number of new people, including a few formally trained software engineers. Since there was little to no training in Fortran at universities in those days, the new software engineers began a push to convert all the codes to C++ because they had little or no experience in Fortran, and they were allowed to exert more influence than the engineers and scientists. Some of what they brought was useful, such as source code control, standardized make procedures (e.g., CMake), and daily automated running of test suites for changed code. But a lot of the conversion from Fortran to a mixture of Fortran and C++ broke a lot of working code, requiring more debugging and testing than would have been necessary otherwise. I’m not against using C, C++, Python, etc. for new code and for converting a few old routines where it would obviously improve things. However, a lot of the language changes seemed to be change for change’s sake.

In the 2000s the laboratory became heavily involved in supercomputing, and moving code to these new systems complicated all of the above. I wasn’t there for any of that, so I can’t comment further. I have heard from the few people I still know there that the push to convert most of these codes completely to C++ is continuing. I’m not up to date on modern Fortran; my skill set still remains in the era of Fortran 77. I have adopted some Fortran 90/95 constructs, but most of what I do in semi-retirement doesn’t require the newer features. But I know that modern Fortran contains many of the features of C++, so I assume it can and should remain viable for the development of new technical software. There has also been a push to combine functions that were handled by a number of smaller codes into huge monolithic codes that handle several functions. Personally, I like the concept of keeping software in smaller units to make understanding and maintenance easier, sort of like the Unix philosophy.

6 Likes

Modern Fortran (i.e., Coarray Fortran) is explicitly tailored for supercomputers. Thanks to the AI boom, supercomputers will become much more widespread in the next few years: Ultra Ethernet Rebuild AI Data Center Network Architecture | by Asterfusion data technologies | Medium.

A whole industry is already involved in this shift: https://ultraethernet.org/. The first UEC specification is expected in late 2024 or early 2025; see this preview from a few days ago: https://ultraethernet.org/ultra-ethernet-specification-update/.

Another big topic for the coming years should be in-memory/near-memory computing through spatial devices.

Some of us (Fortran programmers and application developers, those who develop the Fortran standard, and those who implement compilers and libraries) will use the second half of this decade to further build up Fortran for the 2030s. Parallel programming can be really demanding for each of us, and thus we need simple and standard ways (i.e., Fortran) to approach it and build it up in the next few years.

To allow for dynamic load balancing, I already use single-image asynchronous task execution techniques (i.e., multiple simultaneously executing ‘threads’ within each process) in pure Fortran through a coreRMA/coarray technique, i.e., using Fortran’s network synchronization statements directly.
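The kind of one-sided signaling that such asynchronous techniques build on can be sketched in standard Fortran 2018 coarrays with events. This is a minimal illustration, not the author’s actual code; the image numbers and the variable names `ready` and `work` are assumptions:

```fortran
! Minimal sketch: one-sided put plus event signaling (Fortran 2018).
program async_tasks
   use, intrinsic :: iso_fortran_env, only: event_type
   implicit none
   type(event_type) :: ready[*]   ! cross-image event for signaling
   integer :: work[*]             ! coarray holding the task payload

   work = 0
   if (this_image() == 1) then
      work[2] = 42               ! one-sided put to image 2
      event post (ready[2])      ! signal image 2; image 1 continues at once
   else if (this_image() == 2) then
      event wait (ready)         ! block only until the post arrives
      print *, 'image 2 received', work
   end if
end program async_tasks
```

With OpenCoarrays and gfortran this could be built and run along the lines of `caf async_tasks.f90 -o async_tasks && cafrun -n 2 ./async_tasks`. The point of the pattern is that image 1 never blocks: it posts the event and immediately moves on to other work, which is what makes dynamic load balancing possible.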

Using such programming with the Fortran 2018 standard is a bit cumbersome because we can handle distributed objects only as individual objects. The Fortran 2023 standard solves this issue by introducing the ability to use distributed objects as collections (arrays). I am already preparing my codes for this; the first supporting Fortran compiler is expected in late 2024 or early 2025.

I do not want to comment on the serious problems that I see, especially with Python but also with C++, for algorithm development in the 2030s.

Regards

5 Likes

@rymanjc Nice historical perspective!

I never worked for the US National Labs, but my own experience from the early 1990s onward agrees to a large degree with what you relate. The temporary demise of Fortran came largely from a push by the CS community, due mainly to their lack of experience with Fortran, which fed a switch to the more familiar C-family languages. This effort persists today, even though much of the resulting C-ish code is less efficient and often quite cryptic (disclaimer: I code a lot in C++ as well). I am optimistic about the future, though.

Good points. An efficient built-in parallel API coupled with a good cache-programming API are must-haves nowadays.