Anecdotal Fortran... :-)

I use gfortran for Linux, Windows, FreeBSD and OSX. The source code is exactly
the same for all instances. Windows is cross-compiled on a FreeBSD
machine. I love cross-compiling, and I believe a knowledgeable person could
create all four executables on any one of those systems. But it is
complicated to configure, and the port of mingw32-gcc for FreeBSD is the
only cross-compilation tool I was able to get working. It seems odd to me
that one can’t just:

gfortran prog.for --target=XXX -static -o prog

to get a working executable for target XXX. To even start down that path
one would need to recompile all the libraries on the target machines.
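For what it’s worth, once a cross toolchain is installed, the invocation comes fairly close to that wish; the target is selected by a prefixed compiler name rather than a flag. A sketch, assuming a mingw-w64 cross-gfortran package (the exact command name varies by port and package):

x86_64-w64-mingw32-gfortran prog.for -static -o prog.exe

The catch is exactly the one noted above: each target needs its own prebuilt runtime libraries, which is what these packaged toolchains supply.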

Users with source code often have ancient versions of gfortran. I use
4.8.5 (2015), which is available as an RPM; if users do not have that, f77
has often been available. Some agencies remove f77 and f2c, but they
generally forget to remove libf2c, so I can send the C source generated by f2c.
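For readers unfamiliar with that workflow, it goes roughly like this (a sketch; file names are placeholders):

f2c prog.f
cc prog.c -o prog -lf2c -lm

The recipient then needs no Fortran compiler at all, only a C compiler and the libf2c that the agency forgot to delete.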

I would love to have a Fortran-to-Python or Fortran-to-SAS converter, as there is
never any effort to prevent the import or execution of programs in those
languages, and my users typically have access to both.

Daniel Feenberg

2 Likes

I don’t know whether these compilers have the option, but as far as I know @certik plans to support such features, and I believe flang also aims to support older standards.

In my opinion, it is sad that support for things like fixed form and implicit typing, which are not state of the art, is still needed today. The resulting code, which is hard to read by today’s standards, unfortunately shapes the image of Fortran. Since we are in the “Anecdotal Fortran…” thread, my experience from teaching the “Mathematical Engineering” students at KU Leuven is worth mentioning: in both courses I’m involved in (“Parallel Computing” and “Project Mathematical Engineering”), these “number crunchers of the future” prefer C++ over Fortran by about 80:20. :frowning:

2 Likes

I agree that striving for perfection is not an efficient attitude, but I also don’t see why one should not revise old decisions in the light of new circumstances. There were good reasons for implicit typing and fixed form when Fortran was programmed on punch cards. Both features were designed by experts, and it is plausible to assume they were the best that could be done. Now the situation is different, and the majority of programmers prefer, and are used to, free-form code with explicit variable declarations (see my anecdote about the students in the answer to @kargl).
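To make the implicit-typing point concrete, here is the classic failure mode in a minimal sketch (names made up):

program demo
  implicit none
  real :: total
  total = 0.0
  totl = total + 1.0   ! typo: with implicit none this is a compile error
  print *, total
end program demo

Without implicit none, the misspelled totl silently becomes a new REAL variable and the program computes the wrong result; with it, the compiler rejects the line.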

To clarify my point of view: I clearly see the benefit of backward compatibility and shudder when I see web development, where one framework replaces another faster than Fortran standards are published. But there is also a price to pay. Consider computer screens, for example: if we still used VGA, today’s resolutions would not be possible. One always needs to weigh the pros and cons, and in my opinion the Fortran standards lean one-sidedly toward backward compatibility. This means old code runs without changes, which is nice, and I acknowledge that. But the opportunity to position Fortran as the language of the future for numerical science and engineering is missed.

1 Like

But those frameworks are more akin to libraries. JavaScript the language is highly backward-compatible, arguably even more so than Fortran. :slight_smile: Anyhow, sorry for the tangent, your point is otherwise clear and I agree with it.

2 Likes

It’s an interesting thought. The overall structure of Fortran, with many built-in functions but no standard library (apologies to the stdlib team here :wink: ), is also very unusual compared to other languages such as C, C++, and Python. It is probably a heritage of being the first high-level language, and it has its good and bad sides.

1 Like

This is a fantastic thought. Thank you for sharing the wisdom!

1 Like

@feenberg ,

Do you think that, in your own way, you may be unable to follow the supposed teaching behind the proverb “perfect is the enemy of good”?

That is, you may be holding your current FORTRAN solution around Taxsim as the perfect one, which is why you ship f2c-converted output at times and seek further converters for Python / SAS. Because once FORTRAN is “converted” with f2c, one could move forward with the f2c output instead: it can be consumed in different ways without ever referring back to the original FORTRAN source.

You know your users typically have Python. Since some of them do not have FORTRAN, you have to take extra steps whose time and effort could be spent elsewhere, and you have to keep using coding practices that the ISO/IEC document has long made obsolescent (the DATA statement, etc.). So there would appear to be quite a few good reasons to upgrade; is there something “perfect” out there you need to see before you do?

Also, if you give the above proverb a chance and do not view future computations through the narrow lens of a “local optimization” (Knuth and evil and all that), you will likely find that a new Python-based Taxsim, though it may never reach the perfection of your current solution, will be something all your users can make use of, since they already have Python.

Fortran too will benefit if you place FORTRAN in your rearview mirror: Taxsim at least won’t be mentioned as the reason to not make implicit none the default in Fortran!

Moreover, if you accept the above proverb, you may realize you need not seek converters at all. You can try to write your program without a programming language (some call that pseudocode). Once you do, you or your newer incarnations will be able to write the program in any other language, Python or whatever. It may not be perfect, but it will be good.

I don’t know another way to say this, but this comment is great!

Thank you @feenberg!

Yes, we have to support old or even deleted features in LFortran if codes still use them. For example, we are in the process of writing a dedicated fixed-form parser, because fixed form is still widely used; it does not matter that new code is written in free form, which we can now fully parse, modulo potential bugs. It’s a little painful, but overall not that big a deal. Once implemented, the added maintenance is not huge.
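For anyone wondering why fixed form needs a dedicated parser: blanks are insignificant outside character context, so a lexer cannot even tokenize a statement before knowing what kind of statement it is. The textbook illustration (a contrived sketch):

C     statements start in column 7; C in column 1 marks a comment
      DO 10 I = 1.10
C     ^ an assignment to an (implicitly REAL) variable named DO10I
      DO 10 I = 1, 10
   10 CONTINUE
C     ^ an actual DO loop, distinguished only by the comma

A free-form lexer can split tokens on whitespace; a fixed-form one has to scan ahead and backtrack.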

1 Like

In the “spirit” of supporting old / deleted features, the standard shall default to fixed-form source; free-form source shall require

implicit none (fixed_form)

in every scoping unit.

1 Like

@kargl it’s just a joke. After all, we are in the “Anecdotal Fortran…” thread. (Although it does have a point I agree with, we already discussed that elsewhere and I don’t have anything else to add.)

Things like this happen when ‘service’ departments that should support researchers don’t understand the reason for their existence and get more power than the people who actually do the work. The short-term result is that researchers spend their time finding loopholes instead of doing their jobs. In the long term, motivated people will leave such an environment and the clock-punchers will stay.

Forcing everyone to type implicit none at the top of every file is a (very mild) version of the same behavior: ignoring the users’ needs.

I’m currently working on a Fortran code where generic programming patterns have been emulated with the help of preprocessing. It’s the short-term, loophole solution, and it results in horrible code. All attempts at autocompletion are doomed; even grep does not help to find the definition of a data structure or function. Needless to say, the intended long-term solution is migration to C++…
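To give a flavor of that pattern, here is a minimal sketch with hypothetical names (not the actual code): a textual “template” included once per type, with the names supplied by cpp macros.

swap_template.inc:

subroutine SWAP_NAME(a, b)
  T, intent(inout) :: a, b   ! T and SWAP_NAME are cpp macros
  T :: tmp
  tmp = a; a = b; b = tmp
end subroutine SWAP_NAME

swaps.F90 (the capital .F90 makes gfortran run the preprocessor):

module swaps
contains
#define T real
#define SWAP_NAME swap_real
#include "swap_template.inc"
#undef T
#undef SWAP_NAME
#define T integer
#define SWAP_NAME swap_int
#include "swap_template.inc"
end module swaps

Neither grep nor an IDE can find a definition of swap_real anywhere in the sources, which is exactly the problem.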

Ah ok, I misunderstood. I thought you posted your reply as a reaction to Anecdotal Fortran... :-) - #244 by FortranFan.

That is the new Flang, the one in LLVM.

Nice read:

3 Likes

Prime Computer was a company that produced minicomputers for CAD, CAM, and FEM applications.
Even though the company was forced out of the market in the early 1990s, LAPACK still contains code to handle its character set: lapack/lsame.f at 79bfdd46de1d6451abf48f99f93b283b972f5b85 · Reference-LAPACK/lapack · GitHub

I remember porting some code to a Prime computer in about 1980. They had the convention that ASCII characters had the 8th bit set, while most other vendors used the convention that the 8th bit was 0. A consequence of this is that if you read a tape from, say, DEC, and then opened the file with a text editor, it looked normal. But you could not do anything with it, such as searching for character-string matches, because the matches would always fail. The Fortran compiler also could not compile the code because it did not recognize any of the characters, despite them looking normal on screen. To fix this problem I remember writing a little Fortran program, which I called FORCE8, that would set the 8th bit on for every character in a file.
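A modern reconstruction of that little utility might look like the sketch below (based only on the description above; the original is long gone, and the file names are placeholders):

program force8
  ! copy a file byte by byte, setting the 8th bit of every character
  ! to match Prime's ASCII-with-high-bit convention
  implicit none
  integer :: ios
  character(len=1) :: ch
  open(10, file='input.txt',  access='stream', form='unformatted', status='old')
  open(11, file='output.txt', access='stream', form='unformatted', status='replace')
  do
     read(10, iostat=ios) ch
     if (ios /= 0) exit
     write(11) char(ior(ichar(ch), 128))   ! OR in the high (8th) bit
  end do
  close(10); close(11)
end program force8

(Stream access is a Fortran 2003 feature; the 1980 original would have had to fight the record-oriented I/O of its day.)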

One other thing about the LSAME function is that it was an f77 code written before IACHAR was added to the language. With IACHAR, the returned value is always the ASCII code, so you only need to do one set of comparisons rather than one for ASCII, one for EBCDIC, and one for ASCII with the 8th bit set. Having IACHAR and ACHAR simplified a lot of text-processing operations in Fortran.
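As a concrete illustration, a modern equivalent can be written in a few lines (a sketch, not the actual LAPACK routine):

logical function lsame_modern(ca, cb)
  ! case-insensitive comparison of single characters; IACHAR always
  ! returns positions in the ASCII collating sequence on any machine
  implicit none
  character(len=1), intent(in) :: ca, cb
  integer :: ia, ib
  ia = iachar(ca)
  ib = iachar(cb)
  if (ia >= iachar('a') .and. ia <= iachar('z')) ia = ia - 32   ! fold a-z onto A-Z
  if (ib >= iachar('a') .and. ib <= iachar('z')) ib = ib - 32
  lsame_modern = (ia == ib)
end function lsame_modern

The original lsame.f must first work out which of three character conventions the machine uses before it can fold case.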

3 Likes

The job advertisement in that Twitter thread made me laugh – it would never be posted today.

1 Like

The revised edition of this classic could be useful for someone implementing special functions.

Whittaker and Watson

Posted on 20 July 2022 by John Cook

Whittaker and Watson’s analysis textbook is a true classic. My only complaint about the book is that the typesetting is poor. I said years ago that I wish someone would redo the book in LaTeX and touch it up a bit.

I found out while writing my previous post that in fact someone has done just that. That post explains the image on the cover of a reprint of the 4th edition from 1927. There’s now a fifth edition, published last year (2021).

Someone may reasonably object that the emphasis on special functions in classical analysis is inappropriate now that we can easily compute everything numerically. But how are we able to compute things accurately and efficiently? By using libraries developed by people who know about special functions and other 19th century math! I’ve done some of this work, speeding up calculations a couple orders of magnitude on 21st century computers by exploiting arcane theorems developed in the 19th century.