Using reserved words as variables

It also takes quite some cynicism to assume that I’m using that word for whatever you think I’m using it for. It’s wonderful how you jumped to conclusions. But I don’t care, and you don’t care either.

But I concede defeat. I really could not care less about what you folks took away from what I wrote. But clearly you are way too invested in whatever this is. So you are right.

@ilayn , where are the “bug reports” though? The link you provided is a meta issue on the worklist to replace code that uses FORTRAN 77 (and related dialects) in SciPy. But such a worklist is not a collection of bug reports.

Now, the bulk of the code to be replaced appears to be “wrappers” in FORTRAN 77 (and related dialects) for *PACK libraries, implemented around the SciPy take-off circa the early 2000s, perhaps by some early SciPy enthusiasts (I noticed the signatures of Pearu Peterson et al. in the .f files) who possibly weren’t the most experienced and knowledgeable Fortranners around, who adopted FORTRAN 77 (and related dialects) almost mechanically for the task, and who unfortunately stuck with it?

The first few of the .f files I looked at in your meta issue appear to consume LAPACK. Ostensibly, LAPACK is extremely robust and stable, especially since v3.11.0, dated Nov. 2022. So what issues do you have with LAPACK, for example? Is it the inconvenience of working with older-style “API”s, or some serious numerical bugs in the APIs themselves, say the *GETRF routines for LU factorization? I seriously doubt it is the latter, but I am most open to learning more and to providing some input on any Fortran bugs, if they are present.


I am getting a bit confused in this thread about what exactly the current discomfort is, because, amazingly, multiple triggers went off. Facetious but nevertheless intriguing. Is it me saying the past is incompetent, which tickled certain folks? Is it the fact that old code has bugs? Or is it me being passive-aggressively accused of lying/exaggerating about the number of issues?

Let’s first clarify: maybe my English is not good enough, but this is what incompetent means: Incompetent Definition & Meaning - Merriam-Webster. Pick any one of the definitions as what I meant. So I don’t care about your feelings about the past or your reminiscence of the olden days. Fortran code from the 90s (just like code from the 90s in any language) is not capable of doing what current-day code is expected to do. So let’s not insult each other’s intelligence. Users on this forum openly ignore this fact; however, the code has to run on many architectures without causing trouble. Look at our CI/CD pipelines to see how many architectures we have to release SciPy for. So the language and the codebase should have been modernized, but they weren’t. Not my fault. Not yours either.

So I hope no one here is going to attempt to say that gfortran + some flags is up to production grade. Then the blame is on you, not me, for being simply out of touch. Plus, just because you code in Fortran does not remotely imply that your code is more scientific or that you have the high ground. Hence I am not going to take anyone seriously if they are writing a double loop for matrix multiplication. Let’s agree to disagree on that. It doesn’t matter what anecdote you might have; that ship has sailed, and I am not going to reply to anyone on this. If the “scientific” language doesn’t give a matmul infix operator out of the box, then it’s not my feelings that need to get hurt. Example from the internet: fortran - Why does a manually programmed matrix multiplication combined with matrix addition give better performance than the intrinsic functions? - Stack Overflow. That is an example of incompetence. I am not going to accept writing 20 lines of code to do (A * b)' * (C * X) in MATLAB notation. In Python it is much uglier, (A @ b).T @ (C @ X), but still 10-ish characters; unfortunately we can’t influence the mother ship that defines CPython from SciPy. However, if matmul(A, B) segfaults in a language and they don’t fix it for decades, that IS incompetence. Just to disclose: I wrote my PhD thesis on Linear Matrix Inequalities for large control-theoretical problems. So excuse me, but I know what a long piece of linear algebra code means. I have all the scar tissue, so save your loop arguments from the 70s for the uninitiated.
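To make the notation concrete, here is a minimal NumPy sketch (my illustration, with made-up shapes, not from the post) of the infix expression being discussed:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
b = rng.standard_normal((3, 2))
C = rng.standard_normal((4, 5))
X = rng.standard_normal((5, 2))

# The infix form discussed above: (A * b)' * (C * X) in MATLAB notation.
result = (A @ b).T @ (C @ X)

# The same thing spelled out with explicit calls; the infix @ is just sugar
# for np.matmul, which is the point of the one-liner.
result2 = np.matmul(np.matmul(A, b).T, np.matmul(C, X))
assert np.allclose(result, result2)
print(result.shape)  # (2, 2)
```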

Anyways, since the question was about bug reports, let’s take PROPACK. This piece of code has served millions of problems to date. It is not only my favorite old-but-gold code, but also, compared to the other F77 code lying around, the most careful one in adhering to “modern” sensitivities such as overflow/underflow detection, an effective separation of concerns and a clear separation of computations, architecture awareness, OpenMP support, allowing for a drop-in replacement mechanism, and so on. I really do think it deserves all the credit it has accrued over the decades. The author clearly was not indifferent to the shortcomings of his day and left a lot of facilities for maintainability. If I’m not mistaken, that’s what he continued to do by working for Google later on.

However! This does not make it immune to the erosion of time. Let’s start with the amount of work it takes to get a decent library into a Python package. I always smile when I see some members of this forum saying how easy it is, an almost trivial effort, to include a Fortran lib: just slap some iso_c_binding here and there and you’re off to the races. Well, let’s investigate, shall we? Take a look at the Files tab of the PR.

So then, after we did that, here are some randomly selected things we fixed over time. Emphasis on random! Don’t get the funny idea that these are all there are; you would be very wrong.

BUG: Seg. fault in scipy.sparse.linalg tests in PROPACK · Issue #15108 · scipy/scipy · GitHub (after this we made PROPACK opt-in because the segfaults didn’t stop; it is still open and we don’t know how to fix it yet)
BLD: more PROPACK fixes, removing timer code by rgommers · Pull Request #18421 · scipy/scipy · GitHub (the second statement about Fortran is both inaccurate and anachronistic; Fortran is bad at every kind of professional logging scheme, and if you disagree, then I need hardcore evidence comparable to the Python logging library)
BLD: fix two `-Duse-g77-abi` regressions and a PROPACK bug by rgommers · Pull Request #18426 · scipy/scipy · GitHub
BUG: some tweaks to PROPACK f2py wrapper and build flags by rgommers · Pull Request #18263 · scipy/scipy · GitHub (read the comments in the code)
ENH: add wrappers for [zc]dotc functions for MKL compat by mckib2 · Pull Request #3 · scipy/PROPACK · GitHub (we had to fix the [zc]dotc shims because of Fortran/BLAS ABI issues)

And these are just the fun ones.

In case you have been paying attention, this is a single line in my Fortran enumeration META: FORTRAN Code inventory · Issue #18566 · scipy/scipy · GitHub.

So what did I do? I read the friggin’ code and rewrote it. It should not have been me doing this; it should have been one of those lovers of Fortran and linear algebra. But no.

There are many other lines that are causing issues, or for which we cannot answer the feature requests; here is a recent list for slsqp: META: `slsqp` optimizer · Issue #19130 · scipy/scipy · GitHub.

So what did I do? I read the friggin’ code and rewrote it. It should not have been me doing this; it should have been one of those lovers of Fortran and optimization. But no.

Almost every line in my list has tens of issues that we fixed here, removed there, morphed, and so on. So when I say never-ending, I mean it.

The point here is to understand that we are not able to close these issues. That’s the problematic part. We are not a service portal, and the number of issues doesn’t measure anything. All we can do is tell folks that we are aware of the problems, and that’s about it. In certain cases we don’t know how to fix them. If you ship a package for millions, there are lots of cases where your ifort-and-left-mouse-button convenience cannot provide an answer.

So, another issue: compiling this F77 code for others. Back in the day, someone from here kindly offered to help us with the NAG compiler, for example, over at SciPy. Then NAG declined to help, and ifort is not free, so we have a single option, which is gfortran. I’m trying to be positive here, so let’s not talk about gfortran.

OK, fine, LFortran, I get it: it is growing and getting taller by the day. Don’t think for a second that I don’t have massive respect for @certik and the others who are doing something about Fortran, instead of the rest who just endlessly praise a de facto dead language and do nothing about it. However, that also doesn’t mean that I agree with his outlook on Fortran. I think the current team is exceedingly productive and the ecosystem is ripe for a fast, Rust-like array language. So, in my opinion, if they dropped all the, excuse my French, historical shit behind them and started tabula rasa, they would be adopted much more readily. And clearly they have the knowledge. I don’t quite understand the loyalty to this language. This is not to belittle anything (other than the users I mentioned above); people are even trying to leave C/C++ behind. So if you think Fortran is worth keeping, imagine C++. Hence nothing is sacred, and if and when the time comes, we should be objective, not expect a committee to come and save us.

Folks here didn’t quite get the point, but let me repeat: first of all, the MATLAB scripting language uses 1-indexing because of legacy, but internally it uses 0-indexing, so remove that one from the list. Then you have Fortran, Julia, and R, the latter two being practically followers of Fortran. So it is Fortran’s fault to start with. If you are still having this debate, lots of surprises are waiting for you when you want to link to other languages and be adopted for low-level programming.

We touched on this issue a few times over at SciPy when there was some activity around @zaikunzhang ’s PRIMA rewrite. Judging by the avatars, I think it was @hkvzjal who was looking into the example I gave, but I don’t know what happened afterwards regarding noncontiguous array slices of NumPy. If you keep using contiguous column-major arrays, you can kiss goodbye to your precious performance gains if you glue to anything that is row-major. Since Fortran and Julia are the ones left in the column-major club, you would be playing on your own. If you haven’t done so, work a bit with C code or NumPy or Rust and you’ll see that, more often than not, you end up creating array copies left and right because of the mismatches. So I’m not pulling this out of my ass; that’s all we do in the Cythonized code, dealing with this silly transposing interplay.
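To illustrate the hidden copies being described, here is a minimal NumPy sketch (my illustration, not from the thread) of what happens when C-order and Fortran-order code meet:

```python
import numpy as np

# A C-ordered (row-major) array, as NumPy produces by default.
a = np.arange(12.0).reshape(3, 4)
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])  # True False

# Handing it to a column-major (Fortran-order) consumer forces a full copy.
fa = np.asfortranarray(a)
print(fa.flags['F_CONTIGUOUS'])    # True
print(np.shares_memory(a, fa))     # False: the data was copied

# Non-contiguous slices are another source of hidden copies: a column of a
# row-major array is a strided view, so a C routine needs it repacked first.
col = a[:, 1]
print(col.flags['C_CONTIGUOUS'])   # False
packed = np.ascontiguousarray(col)
print(np.shares_memory(a, packed)) # False: copied again
```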

Second to last, because I can’t miss that: Pearu Peterson et al. did not adopt F77. He is the author of f2py, which has kept Fortran relevant for all Python users for decades. So I’d be careful about whom I point to as not being up-to-par Fortranners. They included these F77 libraries because there was no better version; in fact, there was nothing else. Like I said, we should not have been doing these rewrites; people like you who give a shit about Fortran should have been doing this, out of historical responsibility. Also, you can feel the Fortran/MATLAB influence in NumPy. So I think you should thank them the most for some of the choices made in NumPy, even though there is nothing in NumPy that is Fortran (including the BLAS/LAPACK). So basically they did all this as a favor to the community.

@ivanpribec writes

Unfortunately, I cannot live off of refactoring Fortran codes, hence I had no motivation to contribute it to SciPy, despite being a user

None of us is living off of this. I know you mean well, I truly do. But that’s the thing I keep seeing on this forum over the last few days, because I read a lot in the last few days :smiley: We are treated as if we were some sort of exclusive, money-making, snobbish asshole club. One guy was angry because I used “get rid of” in a sentence and wrote off the whole of SciPy as shit; I mean, that was fun :smiley: Some guy got triggered because I said “incompetent,” although the definition fits perfectly, like a glove (there is a thread about an HTTP library next to this one; all the glory, but did you folks write any nonscientific code lately? Do you realize how much Fortran is missing?). Someone else thinks Julia people are trying to overthrow Fortran, as if it were not self-destructing on its own. Apologies for saying this, but that is just hilarious. I haven’t seen this many interesting takes on a forum in a while.

Someone was arguing that HPC is the only thing that matters in scientific code and that LANL people are just delusional for even questioning Fortran usage. The audacity! The blasphemy! Some think we are a bunch of amateurs in other languages, punching out shitty code and having wrong ideas about how scientific algorithms should work. Whether that is true or not, I have to confess, I find it really funny but, more importantly, disheartening. Some of the maintainers of these Python packages are actual experts in their fields. They are all sacrificing their social lives to do a little bit of thankless good, only to get shit like the above :slight_smile: Don’t make the noob mistake of thinking that just because we don’t like Fortran, we don’t know how to work with it. Like I said, I started with MATLAB 5 and Fortran 90. I’m not a good Fortran coder, and I would like to stay that way. Incredibly enough, I never thought that keywords could be variables. Because which competent language allows that?

I hope this gives some insight into the brain of a single maintainer of an obscure package in Python, who does not represent that package or Python or its users. Don’t take things this seriously. The only mistake I made was to give a candid response about Fortran, because I do think it is just an unsalvageable mess in 2023. But hey, it is my candid, stupid opinion. It is you who have the option to get offended :slight_smile:

Anyways, I have spent enough time here; though it was fun, it has to come to an end. I thank everyone again for their explanations and help. Take care. Looking forward to what comes out of the LFortran folks’ work.

P.S.: Julia included, many scientific packages are rewriting their own {GE/SY/PO}TRF routines, because the small-array (n ≲ 100-150) performance of the LAPACK libs MKL/OpenBLAS/BLIS is not good enough and they have large overhead (nobody uses the reference implementation from Netlib anyway, which is supposed to be fast because Fortran, right?). So I would like to emphasize: nothing is sacred or untouchable.

We call it LPython: https://lpython.org/. It is as fast as LFortran, it has the syntax that you love, and it is 0-indexed and row-major ordered.


@ilayn Thanks for the frank feedback, I appreciate it.

I can clarify my motivation: I want a fast array language for numerical computing. I am quite agnostic about the syntax; as LPython shows, you absolutely can use Python itself. However, the whole internal design of LPython is guided by Fortran, since in my experience Fortran is the most mature and advanced array language, despite all the faults that you mentioned (and many more that you didn’t). Internally in LPython/LFortran we use the parts of Fortran that make sense and represent them faithfully, we try to improve upon things where it makes sense, and we do not represent things that do not make sense to me, like arithmetic IF, common blocks, and such (we handle those in the LFortran frontend, but do not have dedicated nodes for them internally in ASR).

In my experience, focusing on 1-based or 0-based indexing, or column/row order is a distraction. We support both in ASR, and LPython / LFortran “hides” this, so as a user of the frontend you do not know. LPython behaves exactly like a subset of Python/NumPy, and LFortran behaves exactly as Fortran.

Where I want to focus is performance, modern hardware support (GPU), interactivity (Jupyter), WASM, etc. Yes, for LFortran our focus right now is to just compile codes, including SciPy. We have to do it, so that LFortran gets a user base, I don’t think there is any other way around it. Once the LFortran frontend compiles all codes, we shift our focus to the performance optimizations and backends. There is a lot that can and should be done.


No one here has anything against you as a person, but indeed against your words, which show signs of disrespect for the decades of work by your father’s generation and of blind hatred for something that you confess you barely know and are unwilling to know.

Your lengthy comment contains too many biases, misconceptions, and pieces of misinformation. If you had Abraham Lincoln’s attitude toward unknowns, I’d have continued this discussion, but as you confessed,

I’m not a good fortran coder, and I would like to stay that way.

your attitude toward unknowns is to avoid them. So I will content myself with fixing only one piece of misinformation in your comment:

Intel ifort is not free.

Intel ifort has been freely accessible to the academic and open source communities for nearly a decade and to all living beings in the Universe since around 2017.


For the record, I don’t think of Python programmers as snobs. I’ve been using Python and SciPy for the past 6-7 years with great success. As a Fortran programmer, I am grateful to the SciPy community for giving these dusty decks a second life, and also for uncovering a few bugs in the process.

If the Fortran parts pose such a big obstacle to the maintenance of SciPy, I bet the SciPy community will find the best solution. If that means you rewriting them manually, in your free time, great job :+1:

To me it sounds like you are the perfect person for it. You have the time and the will, you have a PhD, and you have experience with SciPy internals. What I don’t understand is why you victimize yourself over fixing code that you received for free and consciously decided to work on in your free time. You could also write PROPACK from scratch based on the method description, if that’s easier.


This could not describe the post better, and wild disinformation is by far the most critical point here. Not to mention the typical troll attitude of “I don’t care about you and what you think, you idiots.” Note that the discussion was fairly civilized until the OP sparked the flames for absolutely no reason.

Ignorance is the usual fuel for crusades (the exact opposite of a “crusade against ignorance”).


@ilayn, can you please single out one issue which, in your mind, is specifically related to the Fortran language? Column-major order and 1-based indexing do not count here, for those decisions go back to the original design of Fortran.


Well, at some point this becomes one-handed boxing, but OK; it is totally fine that you might not be aware of the license intricacies.

ifort, oneAPI MKL, and the other tools are under the ISSL. So by “free” I mean free in the open-source-software sense, not whether we need to pay for it. As a typical example, conda can redistribute and build against MKL; we can’t, until the ISSL situation is cleared up and OSI-approved in a usable manner. Also, ifort + OpenBLAS don’t go well together, so you need MKL. So probably you are clicking through the EULA as fast as possible, like I do.

I like your self-confidence nevertheless :slight_smile: What I am trying to say is that I don’t consider myself a competent Fortran coder and that I am not planning to learn Fortran. Sometimes you need to read slowly instead of rushing to react and failing at it.

I am not victimizing myself; I am just disappointed by all the time spent here online without anyone bothering to renew things and make them appealing to newcomers. There are so many threads here along the lines of “let’s do Fortran this or that,” but only a very few people actually do anything other than continuously blame others for the decay of Fortran. Otherwise I’m quite alright, thank you very much.


@ilayn, “disliking” is different from “not planning”. I did not enter this discussion for pointless arguments. I sympathize with your experience. I have had days and nights refactoring FORTRAN code that I do not want to remember. But that does not entitle me to call the past “incompetent”. They did the best they could, given the circumstances. I still endure the unbearable pain of untangling nested GOTO statements as needed. If the pain is too much, I start from scratch. In either case, I am thankful to the past generation of developers for their brilliance and for achieving something no community can achieve today, even with the vast technological and tooling advances.


I agree with this sentiment. As you might have noticed, you and I had an initial discussion at the SciPy issue, I took your feedback, which was very helpful, and after that I am just focusing on implementing things, I don’t engage in theoretical discussions, since I already said what I had to say.

I will point out that many of the things you personally do not like about Fortran, which you mentioned above, I also don’t like, and in fact I am pretty sure most people here actually agree with you.

One thing to keep in mind is that every single long-time Fortran user has had a variant of the above discussion many, many times. For you it is the first time, but for us it’s the 100th. I have noticed that some people in the Fortran community get tired of hearing the same arguments against Fortran over and over. I don’t; it doesn’t bother me. I am always happy to discuss.

My personal approach is to just focus on fixing Fortran.

I agree, and I also had these discussions, back in the 2000s, before I gave up on Fortran. So this is not new for me either. I just did not know that I was casually striding through a semantic minefield.

I am just trying to say (and after the above back and forth, I am directly addressing you and no one else :slight_smile:) that maybe you like the mental challenge of it, or some completionist OCD has claimed you; however, I would not be implementing Fortran’s controversial and rather old “warts” if I were you. But who am I to say what you should spend your time on.

That’s why I mention the “new” language. You might say that you want to build upon the Fortran community, and I get that. However, and this is completely my narrow view, there is a need for a serious array language without Fortran syntax clouding things. Maybe an analogy with another language gives a better view.

Imagine Rust or Zig doesn’t exist and you want to achieve something better, as opposed to adding to the worsening image of C++. You might say: well, I want to stand above the existing community, which is terribly fragmented. So you kick off an LCpp project and fight with all the hardcore C fans, and also with the OOP purists. Or you might say: well, maybe we need something new to remedy things, and you kick off another project. You know where Fortran falls short and what is good to keep. Anyways, you get the point.

But again, you folks are already doing huge work through this site, the main website, LPython, and so on. It’s not lost on us.

This was all I wanted to say and only to you :sweat_smile: I hope this finalizes this thread.


Discussing is one thing, and there are many discussions here with various points of view about Fortran. Sparking an n^{th} sterile controversy driven by ignorance (*) and resorting to contempt and disinformation is another thing. The OP did not want to argue fairly with the community here, but rather to spit out his highly negative opinion, as if he had to enlighten the herd of poor Fortran users.

(*) In this forum there are also regular controversies with a certain member. But at least this member has valuable knowledge of the Fortran world and knows what he is talking about, even if I often strongly disagree with some of his views.


Yes, it has the advantage of being totally natural for students. Teaching that in C you must count from zero is a waste of time and energy. \mathbf{N}^\star is natural for humans; zero is a late invention.


0-based indexing makes sense in C, as it is a direct consequence of the pointer arithmetic, and C is all about pointers. But in languages where pointers are abstracted it just becomes a convention which is inherited from the past.

It’s kind of fun to bash 1-based indexing while pretending to promote high-level concepts and abandon lower-level ones, when 0-based indexing comes entirely from a low-level concept. And regarding Fortran this is an even more pointless controversy, as Fortran is any-based indexing, as everybody (?) here knows.

And it’s even more fun to watch a worldwide linear algebra expert bash 1-based indexing, when the vast majority of the math literature depicts matrices and vectors with 1-based indexing.


Yes, in Fortran we are free to choose bounds in \mathbb{Z}, when needed:
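For example, a minimal sketch (my illustration, with made-up names):

```fortran
program custom_bounds
    implicit none
    real    :: t(-10:10)          ! lower bound -10, upper bound 10
    integer :: years(1990:2023)   ! any range of integers works as bounds

    t(-10) = 1.0
    years(1990) = 1
    print *, lbound(t, 1), ubound(t, 1)   ! -10  10
end program custom_bounds
```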

And of course 0 is possible for the lower bound, if necessary…

@PierU Yes, I think you are saying the same thing as I did with different words: for us it’s the n-th such discussion. @ilayn is frustrated with the Fortran tooling, like many others. Maybe some of his arguments are not refined, but if you are able to see past that and try to understand the fundamental underlying problems, those are 100% real. He posted very good factual feedback here: RFC: Get rid of linalg.interpolative Fortran code · Issue #18367 · scipy/scipy · GitHub, I encourage everyone to read it. You can see my response in the next comment also. Those are not all the issues that we need to fix, but it’s a start.

Important: the fix is not just to post the comment that I did. That merely shows that we are aware of the problems and have a plan for how to fix them. Yes, it was a lot of work just to get to this point, to have a plan that can actually succeed. But the most important thing is to actually work on fixing it. The issues are real and they won’t fix themselves.

I posted the comment on April 27, 2023. So what has been fixed since then?

Actually, quite a bit. We have implemented physical types in LFortran, which lays the groundwork for supporting NumPy descriptors directly, without overhead. We have managed to compile several more codes (including specfun and minpack in SciPy, with all SciPy tests passing), and we will publish details in a blog post soon.

I encourage everyone in this thread to help. You can help by contributing to the compilers. You can contribute to the Fortran Package Manager to make it more robust. You can contribute by creating high-quality modern Fortran libraries (which, among other users, SciPy could also use). You can contribute to the fortran-lang webpage and documentation. We need all of this and more.

I am personally just focusing on the compiler parts. If anyone is interested in contributing, please get in touch and I’ll help you get up to speed. You don’t need prior compiler experience; I will teach you everything, as I have done for many other contributors.

It’s really fantastic that I got to take the blame. I didn’t address anyone, and I didn’t care about Fortran; I addressed you directly, @certik :blush: because I do think your efforts are really valuable.

Then some people got offended and started defending Fortran, because who knows why. So far I have been called a troll, then a liar, then ill-intentioned, and when I prove otherwise, hoping that it would clear things up, I still get no hint of self-correction; the goalposts just move. I didn’t bother piling on Fortran; I gave meticulous examples of how F77 code hampers us. You don’t seem to read things. You asked for it.

Let me put it differently: you folks are really welcoming. All the best to you and to this little haven where you have found refuge. But can we stop this thread already? I have better things to do, and probably you do too.

This is an idea that is proposed every so often: take modern Fortran, remove all of the ugly legacy stuff, and then move forward with the new, sleek, clean language that is left over. This has been attempted, with actual working compilers, in the past. Two attempts that attained some short-term popularity were the F and ELF languages.

https://dl.acm.org/action/showFmPdf?doi=10.1145%2F203594

Neither of these was ever really very popular. I think that was not because of any single glaring failure, but rather because of a variety of small missing features that, over the population of Fortran programmers, just added up.

I think these attempts demonstrate a fairly well-known fact: it is difficult to remove a feature from a programming language. Once a feature is added, there will almost always be some programmers who find it useful, and perhaps even essential. One consequence of this is that we all need to be vigilant when new features are added to Fortran. We need to ensure that they are good features: as reliable, as well tested, and as useful as possible. If a bad feature is added, it is not only difficult to remove (e.g. real DO-loop indices, which even today are still supported as legacy features in some compilers), but it crowds out the good versions of those features that might be proposed later. And of course, there is the possibility that when enough bad features are added to the language, it will die off completely.

The OP thinks that programmer-defined lower bounds for arrays are a bad idea and wants that feature removed from Fortran. That will not happen, simply because there are other Fortran programmers who like the flexibility and expressiveness that the feature provides.
