Yes, FORTRAN 66 / FORTRAN 77 are outdated (and have been for nearly half a century). One can either port the code to modern Fortran with minimal effort, or spend considerable resources on a rewrite and open a Pandora’s box of new language and developer bugs — a trade that only makes sense if the original codebase is utterly worthless, which appears to be the case here according to Julia co-creator Alan Edelman. So, in the end, it’s people’s preference to start a new project in whatever language they find comfortable. It has nothing to do with Fortran. But it certainly makes the story more interesting to portray Fortran as an alien monster that no student can learn. When I have asked people for their reasons, the typical response is: “oh, we are talking about FORTRAN 77”.
As a humble Fortran user, I want to take this opportunity to remind the Fortran standard committee of the importance of adding preprocessing (metaprogramming) capabilities to the language. Based on my limited knowledge of Julia and extensive knowledge of Fortran compiler preprocessors, Julia macros appear to be little more than glorified, standardized versions of the preprocessing tools that Fortran compilers already ship. Preprocessing capabilities do not introduce lousy coding habits by themselves, but the combination of Fortran with a non-standardized compiler preprocessor can — and the latter is what we have right now. I may be naive, but I cannot comprehend the strong resistance against standardizing a subset of preprocessing capabilities in Fortran. I can provide many instances of preprocessor use cases, why they are essential, and why and when they are not bad coding habits if standardized. Standardizing a subset of preprocessing capabilities may need some deliberation, but implementing it should be little to no burden for compiler developers, since all major compilers already provide extensive preprocessing capabilities.
Why is the open-source Fortran community willing to be held hostage by the standard? I understand that standards have advantages, but ISO seems actively harmful to the progress of the language.
Is it really true that there’s a strong resistance against metaprogramming in the standard committee? If it is I find that to be very sad for Fortran as a language.
As you say, metaprogramming does not equal bad programming habits. It’s a tool for building abstractions, preferably zero-cost abstractions, in a codebase. It makes development more efficient and less error-prone, which is crucial in modern software development at scale.
I found no mention of Edelman in the article, so I don’t see how this is relevant.
Finding students willing to work with Fortran may well be a real problem. Recently, I was in touch with a few developers of SeisSol, a code for seismic wave propagation and recipient of the ACM Gordon Bell Prize in 2014 for performing the largest earthquake simulation at the time. SeisSol is a hybrid code written in C++ and Fortran, but the developers are working to phase out Fortran, since they aren’t able to find new students interested in working with the Fortran parts of the code.
Concerning the following issue,
> …he’s realized that “traditional climate models are in a language [MIT] students can’t even read.”
my first reaction mirrored some of the commentators at the original site, who pointed out that students who know Julia (or any other programming language) should be able to work their way through some Fortran code. But come to think of F77, I guess it’s really not that easy. Things such as arithmetic IFs, computed GO TOs, cryptic FORMAT statements, lack of dynamic arrays, etc. can make legacy Fortran code quite hard to read for the untrained eye. Consider the refactoring problem I posted recently, where I was thrown off track by a stray `if` construct, which was there only as a safety measure against F66 trip-count semantics. As @everythingfunctional said in that thread, it’s not just about refurbishing, but also about knowledge recovery.
Rewriting a complex model from scratch (potentially in a different language) can hold many benefits:
1. uncover bugs in the original,
2. use better algorithms, language constructs, or simply better programming practices,
3. improve performance on new hardware,
4. serve as verification of the original model, and hence provide further trust in the results obtained to this day,

and so forth. If we care about reproducible computational science, doing a full rewrite is very welcome IMO. At the end of the day, it’s also about the model, and not just which programming language was used to express it.
I would be interested in seeing this, perhaps in a separate thread.
I agree that students may not want to work on Fortran 77 or 66 code, but there are many tools, commercial and free, for modernizing such codes. There is no mention of the authors trying to modernize the Fortran code, which makes me suspect that they simply wanted to use Julia.
Why is the official LAPACK, arguably the most important Fortran library, still in fixed source form and without argument intents? Because the people behind it are more interested in other languages.
What a terrible article. It’s so poorly written and misleading that I recommend not sharing things like this without carefully reading them first.
> Climate model code is so outdated, MIT starts from scratch
Climate model code in general? All climate model code? False. Some parts of some climate models, sure.
> Julia replaces Fortran as the basis for Earth’s new digital twin
False. The project described in the article is CLiMA. CLiMA is orthogonal to existing climate, weather, and ocean modeling projects; it does not replace any single project. And Julia is orthogonal to Fortran; it does not replace it. They both have their uses.
> When faced with climate models coded in Fortran in the 1960s and 70s, MIT decided there wasn’t any more cobbling together left for the ancient code, so they decided to toss it out and start fresh.
False. Most code of most climate models is post-1990s. Yes, there are legacy pieces here and there in some models that were written in the 80s, and possibly some in the 70s. None of this legacy code is anything that anybody has to cobble through; it’s code that works. Yes, it’s probably not the code that most easily runs on modern GPUs, but that’s a separate issue.
> The goal of this grand challenge is to provide accurate and actionable scientific information to decision-makers to inform the most effective mitigation and adaptation strategies.
Indeed, this seems an accurate description of Raffaele’s and Noelle’s new project. This description in itself has nothing to do with Julia or Fortran. It’s about effectively disseminating simulation output in a form that is actionable for end-users.
> Students can’t read model code
False. They can. I personally know many students who could read model code. In my 15 years in this field I haven’t met a student who couldn’t eventually work through some existing code, even legacy Fortran — and this was at lower than top-tier universities. I’m quite confident that MIT students will do well reading and understanding existing code. Yes, you could argue that I’m taking this statement literally, and that some code is more difficult to read than other code, but when it’s written like this I have no choice but to take it as written.
That Fortran is less readable than Julia is not even the cause of the problem. To me personally, modern Fortran is not any less readable than Julia, and in some cases Julia can be less readable. The difficult-to-read code comes about when application-domain scientists write code and contribute it to the codebase; such code is often not optimized for readability or maintainability. So, when a project like CLiMA becomes successful enough to attract contributions from application-domain scientists, there will be an opportunity for the CLiMA team to enforce certain standards when merging contributor code into the main codebase. However, this advantage exists because the project started in a time and environment of better software development practices than was possible a few decades ago; it is not so much a property of the language. Modern Fortran climate and weather projects use much better software development practices today than when they started.
> CLiMA made the determination that old climate models, many of which were built 50 years ago and coded in Fortran, had to go if there was going to be any progress toward better climate models.
False. Most climate models used today are much more recent (< 20 years). And, there’s abundant existing evidence of continuous progress toward better climate models.
I could go on.
This is strictly a criticism of the linked article. It is not criticism of Julia, CLiMA, Raffaele and Noelle, all of whom are doing great work and whom I regard highly.
Thanks @alozada, but ouch, really, given what it quotes — comments such as (emphasis mine):
- “The majority of existing climate models run on Fortran, a programming language created in the late 1950s that is unfamiliar to most people under 30. “I’m glad I can finally stop using my grandfather’s programming language,” says Ali Ramadhan, an EAPS doctoral student on the CliMA team.”
My father (in his 70s) is still a very active applied mathematician, and he still programs in fixed-format FORTRAN 77, with lots of GOTOs. Legacy lives in people, sometimes independently of how the language evolves.
That’s wonderful. Invite your dad to the Fortran Discourse.
Several of his optimization codes, such as the Trust Region Derivative-Free algorithm (TRDF), have now been added to my list of Fortran codes on GitHub. I would suggest adding topics such as numerical-optimization and constrained-optimization to the repos to make them easier to find.
Kudos to your father! How lucky that Nature did not choose one of the MIT know-it-alls like Mr. Ramadhan as the reviewer.
The most important programmer of their group uses modern Fortran, and advocates for it. (The specific codes you linked are by a former student.)
Consider the simplest scenario of writing runtime checks for algorithms. One can replicate the runtime `block; ... end block` checks, which differ only in assertion and message, across the 10000 procedures, or one can write a preprocessor macro that takes two arguments, `(assertion, msg)`. The >5-line check becomes a single line with the preprocessor macro, saving 4 * 10000 lines of code and an inordinate amount of developer time, while also reducing the opportunities to introduce bugs.
This is just a simple example. I can go on with more sophisticated scenarios.
Simply standardizing the existing preprocessing facilities is not the solution. Ideally, FPP macros could be like functions, implemented in modules to be imported wherever they are needed.
We began a project some time ago with the goal of strict Fortran-standard compliance. The first standard restriction that we abandoned was the maximum line length of 132 characters in the source code (other programming languages are not any better in this regard). Today, the codebase contains over 32000 preprocessing fences for about 800 unique functionalities. That is 40 FPP fences per functionality, on average. Ideally, that 32000 could have been fewer than 100. To be clear, this is the situation in any language that lacks standard preprocessing/metaprogramming capabilities, not just Fortran. What I find disappointing is that I occasionally hear such statements from some standard committee members as an excuse to evade the discussion and development of FPP facilities in Fortran. I find an old quote from FortranFan’s comments quite relevant to such discussions, which surface every once in a while in this forum:
> The way language enhancements work is like this: it entirely depends on who makes the requests and how; otherwise, it’s “no enhancements for you”.
The end-user cannot do everything by themselves, from developing doc tools and debugging tools to fixing compiler bugs and implementing new features (as we frequently hear in this forum), or finding creative ways of bypassing compiler bugs or the standard’s limits (because the committee is not interested in the topic). I even volunteered recently, in another forum, to help with the development of PDTs (parameterized derived types) in gfortran. No one from the GNU volunteer developers even bothered to respond to my inquiry, or to say “no, we do not need your help”.
Alan talks about this project at SC19. He is also the person I was referred to for more information about this project when talking to people in the Julia community.
The argument that students cannot read something says much about the students, but also about the way we teach them applied programming. I freak out every time I hear arguments like “Oh, I use Julia/Python/whatever because it has all the numerical methods already implemented.” At least in fields that depend heavily on numerical computations, students with such an attitude are completely useless. I don’t know whom to blame for this. Perhaps the educational system, perhaps the age in which we live: the new kids are used to looking at images much more than to dealing with actual numbers.
Old Fortran codes should be evaluated and modernized, but rewriting them in a different language for the sake of readability is insane.
Fortran #31 in Tiobe
I learnt F77 for my PhD, when F95 was already on the market, just because my advisor told me so, and because it was the language he used. I still find some arithmetic IFs from time to time in old pieces of code.
Now I am updating myself to F2018. After learning some other languages, I think it is simply the best language for numerics (on an equal footing with C/C++).
Focus should be put on efficiency. That is where Fortran is unbeatable. I doubt very much that Julia can do any better than Fortran on the same hardware.
Not just efficiency, but also expressivity. 90% of the development effort is writing and debugging code, and preprocessing/metaprogramming capabilities are essential for developing generic libraries. If I remember Steve Lionel’s words correctly, the Fortran committee tried to add such features to the language in the 1990s. It’s a pity the effort was not taken seriously and was soon forgotten in the 2000s. The argument is that compilers already have FPP facilities, but relying on compilers is neither good nor enough to solve the problem.
To clarify, Fortran is still among the best and most expressive languages for numerical computing, based on my experience using and teaching half a dozen programming languages. But there is no guarantee it will remain so if it relies too much on past achievements and continues with significant inertia against new enhancements.
By “expressivity”, do you mean readability?
I completely agree, but this is also related to good programming habits and self-discipline.
In Python, indentation is part of the program. That is not the case in Fortran.