Is the Fortran Standard crucial for the future of the Fortran language?

I could mention plenty of reasons, although I am not sure this is the right thread to do so. It’s hard to keep this short unless I skip the reasoning entirely, but at least I’ll try. This is what I consider essential for scientific computing:

  • Compiled language, with a syntax that helps the compiler optimize the code. Interpreted languages (like Computer Algebra Systems or multi-paradigm languages) are good enough to quickly try a new idea, but when you realize the idea actually works, you usually need to do the rest of the job with a compiled language, because it is typically orders of magnitude faster.
  • Well-designed language that restricts users from doing whatever they want with pointers and other structures. Pointers are not evil, and they do have their uses, but using pointers for everything (and therefore giving the user the power to do whatever they want with them) is evil. It is a constant memory-leak/segmentation-fault threat, not to mention it makes it harder for the compiler to optimize the code. I do not want to run valgrind every time the code is modified, simply because the language makes it so easy to create a hole somewhere (especially important when you work with others on the same code). See the first sketch after this list for the alternative I have in mind.
  • Decent array support. Scientific computing is literally full of matrices, and even non-scientific applications are full of them (games, or graphics in general). I don’t want a language that basically has no arrays at all, only a special use of pointers posing as “arrays”. Nor do I want a language that does have arrays, but only half-baked ones, so that you need to reinvent the wheel with templates and operator overloading in order to “teach” the language basic matrix arithmetic. Nor do I want to rely on external libraries for that, simply because the language lacks those features. I would rather do the job with a language that has such essential features built in (second sketch below).
  • Proper decision-making features. For example, a case statement where you have to add a break to each case, again and again (otherwise execution happily falls through to the next case instead of exiting), is, simply put, a broken feature. In 99.9% of cases you want the break to be there and, whenever you don’t, the language should provide other decision-making statements for the task. Scientific computing is software, and software has bugs; the last thing I need is a broken feature that adds another potential and completely unnecessary cause of bugs (third sketch below).
  • Any language where the standard way to declare a variable is the new command is out of the question. I won’t even bother explaining why.
  • Object-oriented programming doesn’t simplify many scientific applications but, whenever it does, I want a language that doesn’t let you do whatever you want with class inheritance. That freedom usually creates more problems than it solves, and just stands in my way as unnecessary extra code maintenance.
  • In the vast majority of my work, I have to deal with complex number arithmetic. I don’t want to do that in a language that simply doesn’t know what a complex number is. And, again, I don’t want to patch the problem with an external library that adds these features and may or may not work tomorrow, or be compatible with my compiler version (last sketch below).
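
To make the pointer point concrete, here is a minimal sketch in Fortran (the language this thread is about, even though the reasons above are language-agnostic). The program name and array size are made up; the point is that allocatable arrays give you dynamic storage without raw pointers, and are deallocated automatically when they go out of scope:

```fortran
program alloc_demo
  implicit none
  call work(1000)
contains
  subroutine work(n)
    integer, intent(in) :: n
    real, allocatable :: a(:)   ! dynamic storage, no pointer required
    allocate(a(n))
    a = 1.0                     ! whole-array assignment
    print *, sum(a)
  end subroutine work           ! a is deallocated automatically on exit
end program alloc_demo
```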
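
For the array point, a short sketch of built-in whole-array arithmetic and matrix intrinsics (the values are arbitrary):

```fortran
program array_demo
  implicit none
  real :: a(3,3), b(3,3), c(3,3)
  a = 1.0                           ! whole-array assignment
  b = 2.0
  c = matmul(a, b) + transpose(a)   ! matrix product and transpose are intrinsic
  b = 2.0*a + b                     ! element-wise arithmetic, no operator overloading needed
  print *, c(1,1), b(2,2)
end program array_demo
```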
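
For the decision-making point, a sketch of a select case construct: each branch exits the construct on its own, so there is no break to forget and no fall-through:

```fortran
program case_demo
  implicit none
  integer :: flag
  flag = 2
  select case (flag)
  case (1)
    print *, 'one'
  case (2)
    print *, 'two'              ! execution leaves the construct here; no break needed
  case default
    print *, 'something else'
  end select
end program case_demo
```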
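
And for complex arithmetic, a sketch with the built-in complex type (again, the values are made up):

```fortran
program complex_demo
  implicit none
  complex :: z1, z2
  z1 = (1.0, 2.0)               ! the literal constant 1 + 2i
  z2 = (3.0, -1.0)
  print *, z1*z2                ! multiplication is built in
  print *, abs(z1), conjg(z2)   ! so are the usual intrinsics
end program complex_demo
```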

I could actually mention other reasons as well, but this is getting long already. Also, I deliberately avoided naming any specific language in the reasons themselves (although some of them unavoidably point to specific languages). Disagreeing with the above is perfectly fine, but arguing just to defend a language is pointless.
