Resistance to modernization

I think it really depends. Some algorithms may be easier to write from scratch with a new design, as long as the paper that introduced them and any later textbooks cover the methods in enough detail. But for many numerical algorithms out there it’s easier to start from some existing code, whatever shape it might be in. Additionally, you get to learn from the mistakes of the previous design.

Chris Rackauckas, one of the lead Julia developers, acknowledges this quite clearly for his differential equations package:

You will see at the end that DifferentialEquations.jl does offer pretty much everything from the other suite combined, but that’s no accident: our software organization came last and we used these suites as a guiding hand for how to design ours.

As @oscardssmith says, algorithm development didn’t just stop once the classic “PACKS” were released. (And neither did hardware development and software practices: most Fortran ODE solvers on Netlib are not thread-safe, because they were written in an era when parallel computers were far less common than they are today; see the sketch below.)
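A minimal hypothetical sketch of the pattern behind that thread-safety problem (not taken from any actual Netlib solver): persistent state held in SAVEd locals or COMMON blocks means every caller shares one copy, so the routine is not reentrant.

```fortran
! Hypothetical sketch: a SAVEd local makes a routine non-reentrant,
! because all callers in the program share the same single copy.
subroutine step_counter(n)
    implicit none
    integer, intent(out) :: n
    integer, save :: calls = 0   ! one shared copy for the whole program
    calls = calls + 1            ! a data race if called from two threads
    n = calls
end subroutine step_counter
```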

2 Likes

This is a chicken-and-egg problem: new features are less robust because they haven’t had the time to mature, or the users to test them under all kinds of circumstances.

I think Adam Tornhill provides a very balanced view on modernization in his article Why I Write Dirty Code: Code Quality in Context:

The perils of improving existing code

So far I’ve talked about the economics of knowing when – and when not – to invest in code quality. But there’s a different perspective too, and it’s an argument about correctness. You see, clean code that doesn’t work is rarely rewarded. And this is where it gets interesting. It turns out that there’s a correlation between old, stable code and correctness.

I covered this in Software Design X-Rays under the title Your Best Bug Fix Is Time. In that section, I reference research that compares modules of different age. That is, the time since their last modification. That research finds that a module which is a year older than a similar module has roughly one-third fewer defects.

Code stabilizes for different reasons. One might be that it’s dead code, which means it can be safely deleted (always a win). Or it might become stable because it just works and does its job. Sure, the code may still be hard to understand, but what is there is likely to have been debugged into functionality. It’s been battle tested and it just works. Using hotspots, you can ensure it stays that way.

This correctness perspective is the main reason why I would like to put the Boy Scout Rule into context too; should we really be boy scouting code in the long tail? Code that we haven’t worked on in a long time, and code that we might not touch again in the foreseeable future. I would at least balance the trade-offs between stabilizing code and refactoring it, and I tend to value stability higher. But again, it’s a contextual discussion that cannot – and shouldn’t – be taken as a general rule applicable to all code. All code isn’t equal.

Returning to the LAPACK lsame function which prompted this thread: it has been stable for decades and we know it works. Exactly as Adam Tornhill describes, I’ve been boy-scouting in the long tail. However, the routine also contains a bunch of dead code that is only relevant to platforms forming a tiny (or even negligible) fraction of active computers. That tiny fraction could be serviced perfectly well with a small upgrade to use the intrinsic function iachar, formally introduced in Fortran 90, but mentioned on comp.lang.fortran already in 1989 as a wishlist item for Fortran 8X (which became Fortran 90). According to another comp.lang.fortran thread, iachar was available in 11 out of 14 F77 compilers tested in 1994. I’m not sure why it wasn’t used in LAPACK, which was first released in 1992.

The oldest version of LAPACK I could dig out is version 2.0.0 dating to 1997, available through the Internet Archive. The lsame code has been practically unaltered since then.
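For illustration only, a minimal sketch of what an iachar-based upgrade could look like; this is not the actual LAPACK replacement, and the helper name lsame_iachar is made up here.

```fortran
! Hypothetical sketch of a case-insensitive single-character comparison
! built on the intrinsic IACHAR, which returns a character's position
! in the ASCII collating sequence regardless of the processor's native
! character set.
logical function lsame_iachar(ca, cb)
    implicit none
    character, intent(in) :: ca, cb
    integer :: ia, ib
    ia = iachar(ca)
    ib = iachar(cb)
    ! Fold lowercase ASCII letters (97..122) onto uppercase (65..90).
    if (ia >= iachar('a') .and. ia <= iachar('z')) ia = ia - 32
    if (ib >= iachar('a') .and. ib <= iachar('z')) ib = ib - 32
    lsame_iachar = (ia == ib)
end function lsame_iachar
```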

1 Like

I agree that it is useful to learn from existing programs. And I also noticed that the definition of “legacy” codes and “modernization” might be somewhat different from what I was imagining…

Specifically, I was imagining something like modernizing GAMESS, a pretty large quantum chemistry program suite.

Although this program is very solid, well-tested, and well-organized, the programming style is extremely “classic” (or even archaic), with tons of COMMON blocks, GOTO statements, Hollerith characters, etc., etc. (almost anything that people in this forum may want to “eliminate” from their codes :sweat:). However, the package is so large (possibly reaching 1 M LOC?) that it is too daunting and time-consuming for one person (or a few people) to try to “modernize”.

However, the situation may be very different if the program is more moderate in size. In that case, I guess the modernization would be much more tractable and take less time. Even so, if the legacy program has tons of COMMON blocks, say, and you just translate them to global module variables, the future utility might be rather limited (considering multi-threading, for example). I think that kind of restructuring (global variables into type components, for example; see the sketch below) would be very useful for “modernization”, if it is to be done.
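As a rough sketch of what I mean (the module and type names are made up for illustration): the same state can go from a COMMON block to a derived type that each thread or task instantiates independently.

```fortran
! Legacy style (one global copy, not thread-safe):
!     COMMON /SCF/ ENERGY, MAXITER
!
! Restructured style: state lives in a derived type, not global storage.
module scf_types
    implicit none
    type :: scf_state
        real    :: energy  = 0.0
        integer :: maxiter = 100
    end type scf_state
end module scf_types

program demo
    use scf_types
    implicit none
    type(scf_state) :: state   ! each thread/task can own its own copy
    state%maxiter = 50
    print *, state%maxiter, state%energy
end program demo
```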

By the way, GAMESS is also used on this site (mainly for educational purposes, probably), and it is very useful even today. Psi-4 (listed next to GAMESS) is a newer C++ code.
https://chemcompute.org/

2 Likes

I totally agree… And I also remember seeing responses from committee people (on several sites) that “MATLAB and Python etc. are like kitchen sinks, which is not the direction of Fortran” (or something like that). So some time ago I lost hope that a future standard would add such support (too early to give up? :sweat:)

1 Like

Does the process of rewriting the linear algebra code in Julia require incompatible changes to the language? Changes that cause previously working code either to give errors or to silently produce incorrect results? If not, then the fortran issues are not the same as the Julia issues.

On the other hand, suppose that rewriting all that code does require introducing incompatible changes to the Julia language. How do you think the Julia community would accept those changes going forward? Would there be those who argue that backward compatibility is unimportant, and others who argue that backward compatibility is too important to neglect?

These aren’t incompatible changes. Julia very specifically does not guarantee that linear algebra will produce the same results between different versions (because any number of things can change those answers, and they can even be non-deterministic).
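The underlying numerical reason is easy to demonstrate (here in Fortran, this forum’s language of choice): floating-point addition is not associative, so any change in summation order, from blocking, threading, or SIMD width, can change the rounded result.

```fortran
! Floating-point addition is not associative: the same three numbers
! summed in a different order give a different single-precision result.
program fp_order
    implicit none
    real :: a, b, c
    a = 1.0e8
    b = -1.0e8
    c = 1.0
    print *, (a + b) + c   ! prints 1.0
    print *, a + (b + c)   ! prints 0.0 (b + c rounds back to -1.0e8)
end program fp_order
```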

Resistance to modernisation here (Oz government geophysics research) is that many of the new scientists are unfamiliar with Fortran or, worse, scarred by F77. Sure, they can call critical routines from Python (/C/MATLAB/…), but those routines become black boxes.

As time passes, those black boxes become blacker, and features (bugs …) are worked around in calling routines. Eventually, the codes are rewritten, but rarely in Fortran, since adding capability to older codes is somewhat like Jenga. It’s more ‘exciting’ to do things in a ‘modern’ language, since everybody knows that Fortran hasn’t been used for years.

For me, the “ain’t broke …” attitude is short-term thinking.

$0.02.

5 Likes

The proposed changes to make implicit none the default in fortran, or to remove implicit typing entirely, most definitely are incompatible with 60+ years of fortran. I have seen many discussions on this issue, and not a single proposal for change has been backward compatible; they all propose breaking backwards compatibility to varying extents. In my previous reply, I was pointing out the false equivalence of the fortran situation with the Julia situation, and suggesting that if someone did propose breaking backwards compatibility (with the 10-year Julia history), there would almost certainly be opposition to that, just as there is in the fortran community.

Oops, I misread what you were saying.

In my former area, sticking to Fortran plus some modern features was the simplest and fastest way to push research forward. The same code can run on PCs, mainframes, different flavors of Unix, and HPC systems. On all these platforms, good-quality compilers are available and optimized BLAS/LAPACK libraries are available by default. I don’t know whether other programming languages work like this.
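For instance, a plain BLAS call like the following (a minimal sketch; the compile line is just one possibility) links against whatever optimized BLAS the platform provides, be it the reference BLAS, OpenBLAS, MKL, or a vendor library:

```fortran
! Build, e.g.: gfortran blas_demo.f90 -lblas
program blas_demo
    implicit none
    integer, parameter :: n = 3
    double precision :: x(n), y(n)
    double precision, external :: ddot   ! BLAS level-1 dot product
    x = [1.0d0, 2.0d0, 3.0d0]
    y = [4.0d0, 5.0d0, 6.0d0]
    print *, 'dot product =', ddot(n, x, 1, y, 1)   ! prints 32.0
end program blas_demo
```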

1 Like

I agree that costs should not be ignored.

Every time I’ve worked with developers on adapting older codes to GPUs, folks are often bothered by the amount of band-aiding needed just to get things to work. This cost is hard to measure because most groups don’t spend time tracking how much effort goes into working through these details.

It’s about time we started assessing the value of modernizing old code, completed the cost-benefit analysis, made a proper plan about which codes will be carried forward, and did away with the legacy that holds us all back.

1 Like

This string of posts popped up in my Hotmail account. After reading these posts, I realized,

  1. life is too short for these types of arguments
  2. I agreed with FortranFan, therefore proving that at least today, there is a God
  3. You use the code you need with the time available if you are a competent engineer
  4. My old workmate in Australia recently said: the Fortran code you had us using in the late 1980s, i.e., MS Fortran V3.03 on DOS, is just now available in the modern packages.
  5. Powell’s group from UCB shows the best and worst of academe and Fortran.
  6. Bentley sells a wind turbine software design package for 120,000 USD; one could do the same thing with AXISHELL, written in 1968, for 25 from UCB.

Just a thought.

1 Like

written in 1968 for 25 from UCB


The caveat is that you need to be a competent engineer; most of the people on this site are in the top 1% of the population. Think of the other poor sods.

This line of questionable thinking and essentially circular arguments gets to the bane of Fortran’s existence.

With implicit mapping, the problem has been a band-aid being treated as a cure for life: introduce implicit none in every module, program unit, and interface construct, ad infinitum. This is unacceptable. And the very notion of backward compatibility in this context is meaningless. Few, if any, of the “practical” semantics from FORTRAN I are still in effect; they have been thrown out and deleted. To hold on to implicit mapping when practically every code still employing it is broken relative to the current standard in more ways than one is insanity. Exhibit A: NASTRAN BISLOC, as blogged by @jacobwilliams.
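For anyone who hasn’t been bitten by it, a minimal made-up example of the classic trap (not the BISLOC code itself):

```fortran
! With no IMPLICIT NONE, a misspelled name silently becomes a new,
! default-typed variable instead of a compile-time error.
program implicit_trap
    ! implicit mapping in effect: I-N are INTEGER, the rest REAL
    total = 0.0
    do i = 1, 10
        totl = total + real(i)   ! typo: "totl" is a *new* variable
    end do
    print *, total               ! prints 0.0, not the intended 55.0
end program implicit_trap
```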

1 Like

There is nothing in this reply that disproves my statement regarding backwards compatibility of the various proposed changes to the default implicit typing in fortran.

That is too bad. If there ever is a proposal to change the default implicit typing that is backwards compatible, I might support it.

That’s being disingenuous, willfully ignoring everything that has been conveyed for so long re: implicit mapping.

The implicit mapping semantics in the standard are a Gordian knot, and cutting the damn thing off is the other way to handle it, one that has been eschewed for shortsighted reasons.

The supposed threat of legacy code maintainers walking away from FORTRAN, should implicit mapping be removed, increasingly looks highly desirable from a certain point of view.

Starting with a big vendor in the 1980s, certain positions have effectively only led to a managed decline of Fortran, and the problem persists to this day. Perhaps the removal of implicit mapping accelerates it; another reason to adopt it sooner.

This is not a trivial issue, and if you think so, then you have not really been following the arguments on either side.

Yes it would be better if implicit mapping was eliminated (or at least declared obsolete) in the standard

This is your opinion, but one that has not been demonstrated in practice or even argued successfully here (or elsewhere). If you have a new argument for your case, one that will convince everyone else, then you should make it here.

but as long as the standards committee is dominated by people with a personal (read commercial) interest in maintaining the status quo that will not happen (at least until the committee gets more enlightened members).

There is no evidence to support any of this. This is just a personal attack against people who you probably do not even know, unfairly questioning both their motives and their knowledge of the subject.

I decided a long time ago that the only way out of the current quagmire that is Fortran is to just fork the modern portions of the language and try to build a community and ecosystem like python or Julia around a new language.

This has already been tried twice, and I would say that both attempts failed to attract any significant mindshare of the fortran programming community. In fact, they might have even contributed to the decline of the popularity of the language in the 1990s.

Unfortunately, I see no other way to free the language from the tyranny of the standards committee and get the things that the user community wants in the language and not just the things the compiler developers (most of whom have probably never written anything more complex than Hello World in Fortran) think they can implement at minimum cost to their organizations.

More personal opinion with no basis in fact. I am not a committee member, I write both large and small programs in fortran, I routinely use implicit none when writing code, and I don’t agree with anything you said in your post. Yet I’m not attacking you personally; I’m just saying that your arguments are unfounded and unconvincing. I’m not telling you to stop arguing your case; I’m telling you that you need to do it better.

5 Likes

I feel like it is too late to modernize Fortran. If the effort had started 20 years ago, there might have been some hope. But now, the number of new users and projects using Fortran is almost negligible. From an economic perspective, any effort to modernize Fortran will have a negative payoff, because it will cause trouble for old projects, and there are no new projects using Fortran.

1 Like

Fortran has been updated many times in the last 60 years, and also several times in the last 20 years. I do agree that there was a long period of contention in the 1980s that resulted in much of the loss of popularity of the language that we see now. But there are ways for normal users to have input into the direction of the language in the future. In many, perhaps even most, of those cases the new features can be added in a backward compatible way, so the decision whether to add the feature will just depend on the benefits. I would say that discussion groups such as comp.lang.fortran (active since the 1980s) and this newer group also help with hashing out the pros and cons of new features.

3 Likes

The “d” word to describe all the posts in question is instead “discourse”! And that only seems par for the course with this site!

Fortran is pegged to an ISO/IEC standard that does not deal with “compiler switches” and the like. So any such notion is irrelevant and inapplicable, as the evolution of the language now inextricably depends on the standard.

See above. Anyway, an overwhelming majority of those doing computing have already voted with their feet and migrated away, as can be seen in the IEEE Spectrum survey of popular programming languages among engineers.

The title of this thread is “Resistance to modernization”. Nothing exemplifies its gist more than the insurmountable resistance to the removal of implicit mapping from the standard: a change that is more than 8 years away in terms of publication, and one that will likely never take actual effect in any of today’s processor implementations. That is, this change will have no practical consequence for any of the legacy apps on whose maintainers’ behalf the vendors are supposedly resisting it.

All that the removal of implicit mapping accomplishes is to impart standard conformance, in principle only, to codes that strive for explicit typing semantics but do not include implicit none. But this is huge. To achieve it in practice, though, any and all codes will still likely need some other action(s) demanded by processors, a la what Intel Fortran does with its -standard-semantics option to apply the semantics of the standard, as opposed to the earlier nonstandard extensions it applies by default in order to appease its “important” legacy customers.

Even as this sucks for modern Fortran developers, the removal of implicit mapping will be a massive advance for the language in spirit: a simple act that can only come into effect a decade from now. Keep in mind that entire new languages pop up, mature, and enable much of newer scientific computing in less than half that time.

The standard bearers should see, understand, and empathize with the ground reality for all the modern Fortran developers who have to do something outside the source to achieve standard behavior. They should take it as a signal of how open the practitioners are to a compromise, ponder the deletions of assigned and computed GOTOs, arithmetic IFs, Hollerith constants, etc., and realize that the same codes supposedly holding on to implicit mapping are already broken by these and often other nonstandard features. How does one “break” something which is already broken!?

But no matter what, for the foreseeable future those who control the votes on the committee will not relent, absolutely won’t. Therein lies the decline of the language.

Threads like these are occasions to shed light on the dark side; the more discourse on this, the better.

1 Like