Anecdotal Fortran... :-)

But those frameworks are more akin to libraries. JavaScript the language is highly backward-compatible, arguably even more so than Fortran. :slight_smile: Anyhow, sorry for the tangent, your point is otherwise clear and I agree with it.


It’s an interesting thought. The overall structure of Fortran, with many built-in functions but no standard library (apologies to the stdlib team here :wink: ), is also quite unusual compared to other languages such as C, C++, and Python. It is probably a heritage of being the first high-level language, and it has both good and bad sides.


This is a fantastic thought. Thank you for sharing the wisdom!


@feenberg ,

Do you think that, in your own way, you may not be following the teaching behind the proverb, “perfect is the enemy of good”?

Could it be that you are holding your current FORTRAN solution around Taxsim as “perfect”, and that is why you sometimes ship output from an f2c converter and seek further converters for Python / SAS? Once the FORTRAN is “converted” with f2c, one can move forward with the f2c output, since it can then be consumed in different ways without necessarily referring back to the original FORTRAN source.

You know your users typically have Python. Since some of your users do not have FORTRAN, you have to take extra steps, and the time and effort spent on those could be put to use elsewhere. You also have to continue with coding practices that have long been made obsolescent per the ISO/IEC document, e.g., the DATA statement. So it would appear there are quite a few good reasons to upgrade; is there something “perfect” out there you need to see before you do?
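For illustration (my own sketch, not code from Taxsim): the kind of modernization being suggested is often mechanical. For instance, where older code used a separate DATA statement, modern Fortran allows initialization directly in the declaration, with the same implied SAVE semantics for local variables:

```fortran
program data_demo
  implicit none
  ! Modern style: initialize in the declaration itself.
  ! (In a procedure, this would imply the SAVE attribute,
  ! just as DATA initialization does.)
  real :: b(3) = [1.0, 2.0, 3.0]
  ! Old style: a type declaration plus a separate DATA statement.
  real :: a(3)
  data a /1.0, 2.0, 3.0/
  print *, a
  print *, b
end program data_demo
```

Both forms behave identically here; the declaration-time form is simply one statement instead of two and is friendlier to tools.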

Also, if you give the above proverb a chance and do not view future computations through the narrow lens of a “local optimization” (Knuth and evil and all that), you will likely find that a new Python-based Taxsim, though it may never match the perfection of your current solution, will be something all your users can make use of, since they already have Python.

Fortran too will benefit if you place FORTRAN in your rearview mirror: at least Taxsim won’t be cited as a reason not to make implicit none the default in Fortran!

Moreover, if you accept the above proverb, you may realize you need not seek converters at all. You can try to write your program without a programming language (what some call pseudocode). Once you do, you or your newer incarnations will be able to write the program in any other language, Python or whatever. It may not be perfect, but it will be good.

I don’t know another way to say this, but this comment is great!

Thank you @feenberg!

Yes, we have to support old or even deleted features in LFortran if codes still use them. For example, we are in the process of writing a dedicated fixed-form parser, because fixed form is still widely used; and it does not matter that new code is written in free form, which we can now fully parse, modulo potential bugs. It’s a little painful, but overall not that big of a deal. Once implemented, the added maintenance is not huge.
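For readers unfamiliar with the distinction, here is a small sketch (mine, not LFortran test code) of what a fixed-form parser has to cope with:

```fortran
C     Fixed-form source: a "C" in column 1 marks a comment line,
C     columns 1-5 hold statement labels, a non-blank character in
C     column 6 continues the previous line, and statements occupy
C     columns 7-72.  Blanks are insignificant outside character
C     context, which is part of what makes parsing tricky.
      PROGRAM DEMO
      X = 1.0 +
     &    2.0
      PRINT *, X
      END
```

In free form the same program is written without any column rules, with `!` comments and a trailing `&` for continuation, which is far easier to tokenize.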


In the “spirit” of supporting old / deleted features, the standard shall default to fixed-form source; free-form source shall require

implicit none (fixed_form)

in every scoping unit.


@kargl it’s just a joke. After all, we are in “Anecdotal Fortran” thread. (Although it does have a point that I agree with, but we already discussed it elsewhere, I don’t have anything else to add.)

Things like this happen when “service” departments that should support researchers don’t understand the reason for their existence and get more power than the people who actually do the work. The short-term result is that researchers spend their time finding loopholes instead of doing their job. In the long term, motivated people will leave such an environment and the clock-punchers will stay.

Forcing everyone to type implicit none at the top of every file is a (very mild) version of the same behavior: ignoring the users’ needs.

I’m currently working on a Fortran code where generic programming patterns have been emulated with the help of preprocessing. It’s the short-term, loophole solution, and it results in horrible code. All attempts at autocompletion are doomed; even grep does not help to find the definition of a data structure or function. Needless to say, the intended long-term solution is a migration to C++…

Ah ok, I misunderstood. I thought you posted your reply as a reaction to Anecdotal Fortran... :-) - #244 by FortranFan.

That is the new Flang, the one in LLVM.

Nice read:


Prime Computer was a company that produced minicomputers for CAD, CAM, and FEM applications.
Even though the company was forced out of the market in the early 1990s, LAPACK still contains code to detect its character convention: lapack/lsame.f at 79bfdd46de1d6451abf48f99f93b283b972f5b85 · Reference-LAPACK/lapack · GitHub

I remember porting some code to a Prime computer around 1980. Prime had the convention that ASCII characters had the 8th bit set, while most other vendors used the convention that the 8th bit was 0. A consequence of this was that if you read a tape from, say, DEC, and then opened the file with a text editor, it looked normal. But you could not do anything with it, such as search for character-string matches, because the matches would always fail. The Fortran compiler also could not compile the code, because it did not recognize any of the characters, despite them looking normal on screen. To fix this problem I remember writing a little Fortran program that I called FORCE8, which would set the 8th bit on every character in a file.
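A FORCE8-style fix-up could be sketched like this in modern Fortran (a hypothetical reconstruction, not the original program; IBSET and the filter-style I/O are my choices):

```fortran
! Sketch: set the 8th bit of every character read from standard
! input, so the text satisfies Prime's "ASCII with the high bit
! set" convention, and echo the result to standard output.
program force8_sketch
  implicit none
  character(len=256) :: line
  integer :: i, ios
  do
    read (*, '(a)', iostat=ios) line
    if (ios /= 0) exit                 ! stop at end of input
    do i = 1, len_trim(line)
      ! Bit 7 in Fortran's zero-based numbering is the "8th bit".
      line(i:i) = char(ibset(ichar(line(i:i)), 7))
    end do
    write (*, '(a)') trim(line)
  end do
end program force8_sketch
```

The original would of course have predated IBSET (Fortran 90); in F77 one would have added 128 to the character code instead.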

One other thing about the LSAME function is that it is an F77 code written before IACHAR was added to the language. With IACHAR, the value returned is always the ASCII code, so you only need one set of comparisons rather than one each for ASCII, for EBCDIC, and for ASCII with the 8th bit set. Having IACHAR and ACHAR simplified a lot of text processing operations in Fortran.
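As a sketch of the idea (illustrative only, not the Reference-LAPACK code), a case-insensitive single-character comparison using IACHAR needs only the ASCII branch:

```fortran
! Sketch of an IACHAR-based LSAME.  IACHAR returns the ASCII code
! regardless of the processor's native character set, so the
! EBCDIC and Prime (high-bit-set) branches of the classic F77
! version become unnecessary.
logical function lsame_sketch(ca, cb)
  implicit none
  character, intent(in) :: ca, cb
  integer :: ia, ib
  ia = iachar(ca)
  ib = iachar(cb)
  ! Fold ASCII lowercase (codes 97-122) to uppercase.
  if (ia >= iachar('a') .and. ia <= iachar('z')) ia = ia - 32
  if (ib >= iachar('a') .and. ib <= iachar('z')) ib = ib - 32
  lsame_sketch = (ia == ib)
end function lsame_sketch
```

For example, comparing 'A' with 'a' would return .true., while 'A' with 'B' would return .false.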


The job advertisement in that Twitter thread made me laugh – it would never be posted today.


The revised edition of this classic could be useful for someone implementing special functions.

Whittaker and Watson

Posted on 20 July 2022 by John Cook

Whittaker and Watson’s analysis textbook is a true classic. My only complaint about the book is that the typesetting is poor. I said years ago that I wish someone would redo the book in LaTeX and touch it up a bit.

I found out while writing my previous post that in fact someone has done just that. That post explains the image on the cover of a reprint of the 4th edition from 1927. There’s now a fifth edition, published last year (2021).

Someone may reasonably object that the emphasis on special functions in classical analysis is inappropriate now that we can easily compute everything numerically. But how are we able to compute things accurately and efficiently? By using libraries developed by people who know about special functions and other 19th century math! I’ve done some of this work, speeding up calculations a couple orders of magnitude on 21st century computers by exploiting arcane theorems developed in the 19th century.

I opened an issue for this (Simplify lsame · Issue #701 · Reference-LAPACK/lapack · GitHub) using iachar to bypass the need to maintain branches for different character encodings.

From what I see, the reluctance to add modern Fortran to BLAS/LAPACK is due to several legitimate concerns:

  • the massive reformatting effort, which doesn’t necessarily bring any new features (note that the code in my issue is valid in both free form and fixed form, conforming to F95 and higher)
  • losing the opportunity of using F2C to provide LAPACK functionality to C users who may not have Fortran compilers available
  • using a restricted subset of the Fortran language, such as F77, makes it easier to port LAPACK to other platforms such as Java or .NET via custom transpilers/compilers
  • it would raise the requirements on compiler versions, and require dropping support/compatibility on certain platforms

The FORTRAN-hating gateway:


I wonder if the source code of FLPL is available somewhere, and if a modern version of FLPL written in modern Fortran would be worthwhile.

A Fortran-Compiled List-Processing Language (1959)
Authors: H. Gelernter, J. R. Hansen, C. L. Gerberich
International Business Machines Corp., Yorktown Heights, N.Y.

Abstract. A compiled computer language for the manipulation of symbolic expressions organized in
storage as Newell-Shaw-Simon lists has been developed as a tool to make more convenient the task of
programming the simulation of a geometry theorem-proving machine on the IBM 704 high-speed
electronic digital computer. Statements in the language are written in usual FORTRAN notation, but with a large set of special list-processing functions appended to the standard FORTRAN library. The algebraic structure of certain statements in this language corresponds closely to the structure of an NSS list, making possible the generation and manipulation of complex list expressions with a single statement. The many programming advantages accruing from the use of FORTRAN, and in particular, the ease with which massive and complex programs may be revised, combined with the flexibility offered by an NSS list organization of storage make the language particularly useful where, as in the case of our theorem-proving program, intermediate data of unpredictable form, complexity, and length may be generated.