Yes, such a "very highly (artificially) intelligent species" designed by the very same cadre of "leaders" now in charge, who are all responsible for "corruption and injustice"! You don't need AI at all to guess the outcome…
I think we should try to avoid general social commentary here.
I believe that we Fortran programmers should consider how we can adapt to this rapidly changing world. In early 2023, when the AI subject became so popular thanks to ChatGPT, I thought that such discussions were somewhat like science fiction. I still maintain that ChatGPT is merely a tool, and, quite evidently, a simple text completion tool at that. However, my perspective has shifted. If all goes as planned, starting from October I won't need to work for more than 15 minutes a day for the rest of my life, because my Fortran-based AI assistant will handle the workload for me. Fortran users find themselves in a privileged position: even on a personal laptop, you can run AI models that would require a small supercomputer in the case of Python. Let's take advantage of this opportunity!
P.S. This post’s English has been enhanced by ChatGPT.
I have just read the following article regarding AI-assisted programming; it is quite interesting. Notably, the programming task used in the experiment conducted at MIT was to solve a problem in Fortran, a language none of the participants knew.
Sorry for reviving an old topic, but I did not want to start a new one.
From the article:
“This is an important educational lesson,” said Klopfer. “Working hard and struggling is actually an important way of learning. When you’re given an answer, you’re not struggling and you’re not learning. And when you get more of a complex problem, it’s tedious to go back to the beginning of a large language model and troubleshoot it and integrate it.”
I recall that the famous mathematician R.L. Moore (known for the so-called "Moore Method", or "Inquiry-Based Learning", an approach to teaching mathematics that focuses on having the student develop his or her own proofs of key results) described his philosophy as "The student who learns best is told the least."
I fear that children will take this AI hype all too seriously, and not learn to use programming as an aid to critical thinking.
Here is a good video that is a very reasonable and honest test of what these LLMs can do. The technology performs much worse than even a novice developer would. Complicated code implementing numerical methods for engineering and science looks pretty safe for the foreseeable future.
Indeed, programming is a school of rigor, and one of the most demanding (one wrong character and everything fails…). That alone is sufficient reason to teach and learn programming.
As someone who has looked into this problem more than I want to admit (Morton orders, cycle-chasing algorithms, and so on), I don't think there is a good answer to this question at all.
I have tried recursive and blocked transposes before, and one version was in SciPy before I removed it again. It is unnecessarily tricky to come up with a scheme that works properly across all hardware unless you want to dig into architecture detection in your code, which is beyond my pay grade, unfortunately. Cache-oblivious algorithms are a different kind of beast, and I don't know how to implement them properly, again for all the hardware out there.
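To make "blocked transpose" concrete, here is a minimal sketch in Fortran. It is only an illustration, not the SciPy version mentioned above; the module and subroutine names are made up, and the tile size is an arbitrary guess that a real implementation would tune to the cache hierarchy of the target hardware.

```fortran
module transpose_demo
  implicit none
contains
  !> Minimal sketch of a blocked (tiled) out-of-place transpose.
  !> The tile size is an illustrative guess, not tuned to any cache.
  subroutine blocked_transpose(a, b)
    real, intent(in)  :: a(:,:)
    real, intent(out) :: b(:,:)        ! expected shape: (size(a,2), size(a,1))
    integer, parameter :: bs = 64      ! tile size (assumption, not tuned)
    integer :: i, j, ii, jj, n, m

    n = size(a, 1)
    m = size(a, 2)

    ! Walk the matrix tile by tile so each tile of a and b stays cache-resident.
    do jj = 1, m, bs
      do ii = 1, n, bs
        do j = jj, min(jj + bs - 1, m)
          do i = ii, min(ii + bs - 1, n)
            b(j, i) = a(i, j)
          end do
        end do
      end do
    end do
  end subroutine blocked_transpose
end module transpose_demo
```

The point of the tiling is that both the reads from `a` and the writes to `b` stay within a small working set, instead of one of them striding across the whole array; the hard part, as noted above, is picking a block size (or recursion cutoff) that behaves well on every machine.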
Considering the whole world still needs to link to an F77 library to do linear algebra, I think LLMs are the last of our worries.