Large Language Models and The End of Programming

Below is an email I received this morning about an upcoming talk.

A talk with the same title has been given before, though it has probably been updated over the past months: ACM Chicago Feb. 8, 2023 webinar - Large Language Models and The End of Programming (1080HD) - YouTube

Also: The future of programming with large language models

---------- Forwarded message ---------
From: ACM Learning Center
Date: Fri, May 5, 2023 at 08:06
Subject: Reminder: May 9 Talk, “Large Language Models and the End of Programming” with Matt Welsh


If you haven’t done so yet, register now for the next free ACM TechTalk, “Large Language Models and The End of Programming,” presented on Tuesday, May 9 at 12:00 PM ET/16:00 UTC by Matt Welsh. Oana Olteanu, Partner at SignalFire, will moderate the question-and-answer session following the talk.

Leave your comments and questions with our speaker now and any time before the live event on ACM’s Discourse Page. And check out the page after the webcast for extended discussion with your peers in the computing community, as well as further resources on large language models, generative AI, and more.

(If you’d like to attend but can’t make it to the virtual event, you still need to register to receive a recording of the TechTalk when it becomes available.)

Note: You can stream this and all ACM TechTalks on your mobile device, including smartphones and tablets.

The field of Computer Science is headed for a major upheaval with the rise of large AI models, such as ChatGPT, that are capable of performing general-purpose reasoning and problem solving. We are headed for a future in which it will no longer be necessary to write computer programs. Matt believes that most software will eventually be replaced by AI models that, given an appropriate description of a task, will directly execute that task, without requiring the creation or maintenance of conventional software. In effect, large language models act as a virtual machine that is “programmed” in natural language. This talk will explore the implications of this prediction, drawing on recent research into the cognitive and task execution capabilities of large language models.

Duration: 60 minutes (including audience Q&A)

Matt Welsh, CEO
Matt Welsh is the CEO and co-founder of a Seattle-based startup developing a new computational platform with AI at the core. He was previously head of engineering at OctoML, a software engineer at Apple, an engineering director at Google, and a Professor of Computer Science at Harvard University. He holds a PhD from UC Berkeley.

Oana Olteanu, Partner, SignalFire
Oana is an ML engineer with operating experience in enterprise software and investment experience at the Seed, Series A, and Series B stages. Prior to joining SignalFire, Oana was at Scale Venture Partners, where she invested in applied ML and developer tooling. She sourced Scale’s investments in Flatfile and Proscia, among others, and was part of the deal teams for Honeycomb and AllyO (acquired by HireVue). Prior to VC, Oana worked at SAP, where she contributed to the development of the backend systems used across SAP’s solution portfolio.

The talk starts with FORTRAN as a negative example, showing that code is generally difficult to understand and maintain.

But are AI models easier to understand? (What happens inside them?)

Right, I agree that they are more difficult to understand for the time being.

As far as I understand, the speaker may want to make the point that writing/reading/maintaining code is a relatively difficult job for most humans while being naturally an easy job for today’s AI and even more for tomorrow’s. I may have misunderstood him, and I do not know whether his opinion is correct. Those who are interested may like to attend the talk or check the video.


I definitely doubt that: at least, it cannot be such a general statement.

Try asking ChatGPT to write a Fortran code that transposes an arbitrary matrix in place :slight_smile:


Here you are. :person_tipping_hand:

Prompt to ChatGPT4:

Try asking ChatGPT to write a Fortran code that transposes an arbitrary matrix in place


Transposing an arbitrary matrix in-place is not possible for non-square matrices, as it would require changing the dimensions of the original matrix. However, I can provide you with a Fortran code that transposes a square matrix in-place, and another one that transposes an arbitrary matrix into a new one.

  1. In-place transposition for square matrices:
program square_matrix_transpose
    implicit none
    integer, parameter :: n = 4
    integer :: i, j, temp
    integer, dimension(n, n) :: matrix

    ! Initialize the matrix
    matrix = reshape([(i, i = 1, n * n)], shape(matrix))

    print *, "Original matrix:"
    do i = 1, n
        print *, matrix(i, :)
    end do

    ! Transpose the square matrix in-place
    do i = 1, n
        do j = i + 1, n
            temp = matrix(i, j)
            matrix(i, j) = matrix(j, i)
            matrix(j, i) = temp
        end do
    end do

    print *, "Transposed matrix:"
    do i = 1, n
        print *, matrix(i, :)
    end do

end program square_matrix_transpose
  2. Transposition for arbitrary matrices (not in-place):
program matrix_transpose
    implicit none
    integer, parameter :: rows = 3, cols = 4
    integer :: i, j
    integer, dimension(rows, cols) :: matrix
    integer, dimension(cols, rows) :: transposed_matrix

    ! Initialize the matrix
    matrix = reshape([(i, i = 1, rows * cols)], shape(matrix))

    print *, "Original matrix:"
    do i = 1, rows
        print *, matrix(i, :)
    end do

    ! Transpose the matrix
    do i = 1, rows
        do j = 1, cols
            transposed_matrix(j, i) = matrix(i, j)
        end do
    end do

    print *, "Transposed matrix:"
    do i = 1, cols
        print *, transposed_matrix(i, :)
    end do

end program matrix_transpose

These programs first initialize a sample matrix, perform the transposition, and print the original and transposed matrices. You can change the dimensions and values of the matrices as needed.

I don’t think the “end of programming” looms.

FORTRAN reduced the need for assembly programming. SLATEC/LINPACK/EISPACK/LAPACK/IMSL/NAG etc. reduced the need to code basic numerical algorithms. Fortran 90 reduced the need to code basic operations with loops. LLMs, especially once they become integrated with compilers, will enable programming at an even higher level and improve productivity. I know little about C++, but I know it is fast and has a big standard library. I wanted to speed up a Python program that ranks each element of a growing 1-D array. When I made the request

Given a C++ vector v of floats that is sorted in ascending order, and a scalar float variable, write a function that inserts the float so that v remains in ascending order, and returns the position at which the float was inserted.

and later asked it to generalize the code to arbitrary types it gave me the working code

#include <vector>
#include <algorithm>
using namespace std;

template<typename T>
int insertSorted(vector<T>& v, const T& val) {
    // Inserts the element val into the sorted vector v, so that v remains in ascending order.
    // Returns the position at which val was inserted.
    auto pos = lower_bound(v.begin(), v.end(), val);
    int index = distance(v.begin(), pos);
    v.insert(pos, val);
    return index;
}
My colleagues who analyze data in Excel and format the results in Powerpoint are not going to start programming in Python and C++, but I think LLMs will help me do the analyses they want more quickly.

If I had a dime for every headline discussing “AI”, “ML” or now “LLM” based on naive acceptance of hype…

After following up on a number of these types of headlines, the wildly optimistic ones are invariably cherry-picked to create the illusion of dramatic progress where none exists.

I just came across this Forbes article, which accurately presents the real world of health care (which I know from professional experience) vs. what you would read in the headlines: Coming To Terms With The Healthcare Industry’s Inauthenticity Epidemic

Some quotes:

With years of hindsight, I now see in that health system what I see everywhere: an epidemic of inauthenticity and superficial execution.

In a field ostensibly dominated by scientific discovery and rigor, many results fail to meet the basic standards of a sixth-grade science teacher’s lecture on the scientific method.

There are far too many people who take at face value the company declarations of miraculous results—who learn far too late about the epidemic of normalized inauthenticity.
And, yet, there’s an even more significant issue with the industry’s fakery.
Which is—the bone-deep cynicism that it breeds.

Some case studies:

Though the Stat leaders seem to regard all this as something of an existential threat to the well-being of their profession, I view it as much worse than that. The problem is not that CS people are doing Statistics, but rather that they are doing it poorly: Generally the quality of CS work in Stat is weak. It is not a problem of quality of the researchers themselves; indeed, many of them are very highly talented. Instead, there are a number of systemic reasons for this, structural problems with the CS research “business model”


Which is wrong :wink:

ChatGPT gave you solutions to simple problems you did not ask about, instead of a solution to the more complex problem you did ask about.

I see good discussions and good points here. Maybe those who are interested could attend the talk and ask questions. The registration link is available at the very top of the post.

Disclaimer: I am not affiliated with ACM, I do not know the speaker, and I am not in favor of or against his talk.


I didn’t want to claim that this was the answer you were looking for.
But given that simple prompt, which barely describes the task, there are many open questions about how it should be solved. I think it’s valid for an AI to make the simplest assumption.
When your array has two dimensions, how could you change those dimensions in place with Fortran?

I did not specify how the matrix should be represented. Actually, I didn’t stop at the first answer: I tried helping it by specifying the 1-D array storage. It gave me code that didn’t even compile (*). I helped it again by correcting the wrong syntax; the code then compiled but produced wrong results. I helped it further by suggesting it use cycles and pointing it to the Wikipedia page on the topic; it then gave me code that didn’t compile, with the exact same syntax error as before. I corrected the syntax again and got code that compiled, but it still produced wrong results… Then I gave up :slight_smile:

(*) It was using n % m for the integer division remainder. I suggested that the Fortran function for that was mod; it then proposed n mod m. Finally I insisted it was a function, not an operator, and it got it… but soon forgot it.

By the way, the following should work to get a transposed representation if the matrix is originally in a 2D array:

real, target :: a(n,m)
real, pointer :: at(:,:)
call inplace_transpose(n,m,a,at)

subroutine inplace_transpose(n,m,a,at)
! needs an explicit interface on the caller side (e.g. put it in a module)
integer, intent(in) :: n, m
real, intent(inout), target :: a(n*m)
real, pointer :: at(:,:)

at(1:m,1:n) => a(:)
! do the actual transposition work on a(:)
end subroutine

Which version did you try? GPT3.5 or GPT4? Because when I corrected GPT4, it gave me a “not strictly in-place” solution, but it also said this:

True in-place transposition for arbitrary matrices without additional memory is a more complex task and often involves a complicated algorithm with the risk of data corruption if not done carefully.

So, I asked it to give me that complicated algorithm. It wasn’t working, but at least it knew the kind of algorithm usually used for this task.
I guess it’s not possible for the currently available versions of GPT to solve this task, because the context size is not large enough to fit the whole code.

I apologize for the confusion earlier. You’re right, it is possible to perform in-place transposition for arbitrary matrices using a cycle-based approach. Here’s a Fortran implementation of the in-place matrix transposition algorithm for arbitrary matrices:

module transpose_util
    implicit none
    function gcd(a, b) result(g)
        integer, intent(in) :: a, b
        integer :: g, temp

        g = a
        temp = b

        do while (temp /= 0)
            g = mod(g, temp)
            g = g + temp
            temp = temp - g
            g = g - temp
        end do
    end function gcd

    subroutine in_place_transpose(matrix, rows, cols)
        integer, intent(inout) :: matrix(:)
        integer, intent(in) :: rows, cols
        integer :: i, j, k, cycle_start, temp, next_index, prev_index

        do cycle_start = 1, gcd(rows, cols)
            j = cycle_start
            prev_index = j

            temp = matrix(j)

                next_index = mod(j * rows, rows * cols) + 1
                if (next_index == cycle_start) exit
                matrix(prev_index) = matrix(next_index)
                prev_index = next_index
                j = (j + cols - 1) / cols
            end do

            matrix(prev_index) = temp
        end do
    end subroutine in_place_transpose
end module transpose_util

program main
    use transpose_util
    implicit none
    integer, parameter :: rows = 3, cols = 4
    integer :: i, j, pos
    integer, dimension(rows * cols) :: matrix

    ! Initialize the matrix
    matrix = [(i, i = 1, rows * cols)]

    print *, "Original matrix:"
    pos = 1
    do i = 1, rows
        do j = 1, cols
            write(*, '(i3,1x)', advance='no') matrix(pos)
            pos = pos + 1
        end do
        print *
    end do

    call in_place_transpose(matrix, rows, cols)

    print *, "Transposed matrix:"
    pos = 1
    do i = 1, cols
        do j = 1, rows
            write(*, '(i3,1x)', advance='no') matrix(pos)
            pos = pos + 1
        end do
        print *
    end do
end program main

This implementation consists of two main parts: the gcd function that calculates the greatest common divisor of two integers, and the in_place_transpose subroutine that performs the in-place transposition.

The in_place_transpose subroutine uses a cycle-based approach, iterating over the elements of the matrix in cycles, and swapping elements along the cycle until the cycle is complete. The cycle starts at a specific position, and the algorithm follows the cycle until it reaches the starting position again. The number of cycles to iterate is determined by the greatest common divisor of the number of rows and columns of the matrix.

The main program initializes a sample matrix, prints it, calls the in_place_transpose subroutine to perform the transposition, and then prints the transposed matrix. You can change the dimensions and values of the matrix as needed.
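The generated Fortran above does not actually work (the gcd loop is nonsense and the cycle loop is malformed), but the cycle-following idea it describes is real. Here is a minimal working sketch of that algorithm, written in Python for brevity; the function name and the O(n) visited bitmap are my choices (true O(1)-extra-space variants track cycle leaders instead, at the cost of more code). For a row-major matrix, an element at flat index k belongs at (k * rows) mod (n - 1); a column-major (Fortran-order) version would multiply by cols instead.

```python
def transpose_in_place(flat, rows, cols):
    """Transpose a rows x cols matrix stored row-major in the list
    `flat` (length rows*cols) by following permutation cycles.
    Afterwards `flat` holds the cols x rows transpose, row-major."""
    n = rows * cols
    if n <= 1:
        return flat
    visited = [False] * n          # O(n) bookkeeping to keep the sketch simple
    for start in range(1, n - 1):  # indices 0 and n-1 are fixed points
        if visited[start]:
            continue
        k = start
        val = flat[start]
        while True:
            dest = (k * rows) % (n - 1)   # where the element at k belongs
            flat[dest], val = val, flat[dest]
            visited[dest] = True
            k = dest
            if k == start:
                break
    return flat

m = transpose_in_place([1, 2, 3, 4, 5, 6], 2, 3)
# m == [1, 4, 2, 5, 3, 6]: [[1,2,3],[4,5,6]] transposed to [[1,4],[2,5],[3,6]]
```

Each cycle carries one displaced value forward until it returns to its starting index, so every element is moved exactly once.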


The Modular company, creator of Mojo, a faster Python replacement, has some essays relevant to this thread, such as Do LLMs eliminate the need for programming languages?

… The best and most fulfilling software developments I have ever participated in have always been with groups of talented and dedicated people.

In this context, one quickly realizes that the purpose of a programming language grows to be an abstraction for a human to express intent to another human about the behavior of a program. This still requires unambiguous specification but also shifts the goalposts – a language should be designed to be read, not just written by other humans. Computers are very tolerant and understanding (particularly with the rise of LLMs), so many of us benefit from clear design patterns and easy-to-understand code. Most code ends up being written once, but read and iterated on many times by many people.

The consequence is that overly clever syntactic sugar actually starts to cut against the core goals of the language. Specialized syntax and infrequently used features can make it difficult to understand for those who didn’t write it. While LLMs and other tools can help decode or explain overly complicated code, keeping a single, readable source of truth is ideal.

Modern Fortran is a good language for expressing scientific computing.

Quoting another of their posts, What’s the ideal programming language for LLMs to output?,

Those deep in software development find themselves surrounded by a zoo of different languages that aim to solve problems in various niches. You might have encountered Python for AI and data science, C and C++ for low-level programming, Javascript or Typescript for the web, Swift and Kotlin when building a mobile app, and CUDA for accelerator programming. These are all valuable languages, but given that LLMs reduce the need to care about how writable a syntax is – what qualities of a programming language matter in this new age?

We believe there are three fundamental aspects of a programming language that would make it particularly useful as we head towards an AI-assisted world – its usability and scalability to many domains, the amount of training data that exists, and a rich and vibrant ecosystem. Let’s take each in turn:

  1. The first most critical part of a language is the usability and scalability of the language implementation. The best language for an LLM is one that is highly usable and easy to read for humans, but whose implementation can scale to many different use cases and applications. Unfortunately, many language implementations include design decisions that preclude certain applications. For example, mark/sweep garbage collection isn’t ideal for low-level system software and accelerator programming, Python and other interpreted languages aren’t ideal when performance, parallelism, and threading are required, and JVM or .NET-based languages aren’t ideal for use cases that need small and low-dependence binaries.
  2. To train an LLM that is capable of producing high-quality programs across many different use cases and applications, we need an expansive corpus of training data that seeds the model. An LLM will work much better on a popular and established language like Python which has a large and diverse set of open examples, than a niche or novel language that has no existing code to train on.
  3. Lastly, we believe that a LLM needs a rich and vibrant ecosystem surrounding it. Even for existing LLM-based solutions, rich communities have already developed prompting libraries, tooling, and expertise enabling next-generation ecosystems to form. With this viewpoint, a language should be designed to unlock a massive community of developers – however we choose to define a developer in this new world, from traditional programming to instruction prompting and beyond.

Not surprisingly, they favor Mojo.

There is a problem with tools like ChatGPT: if people obtain answers to their questions generated by the AI model, will they continue to read your websites, your blogs, your code?

Classical search engines redirect the reader to the source, but it seems AI engines do not cite their sources. Classical search engines were, in the end, good scholars; AI engines have very bad practices.

If they do not redirect the reader, will the authors still be motivated to publish online? Who wants to be read by a machine? Who wants to publish just to feed a machine?

But maybe it will therefore reinforce online communities like ours. Humans will join places where they are sure to be read by humans.

For programming, ChatGPT has been trained on StackOverflow among other sites, and SO is unhappy about the diverted traffic, so going forward Stack Overflow Will Charge AI Giants for Training Data.

Although there may be fewer direct readers of your content, if ChatGPT learns from it and produces answers using it, your content may indirectly reach more people. Programming language communities will be writing not just directly for members but also to teach LLMs about the language so that LLMs answer questions about the language well. Having LLMs that are uninformed about a programming language may be as serious a problem as search engines not indexing important sites about the language, a problem that has been discussed here.


Thanks for the link. I will read it.

I understand your fatalist reasoning but I must admit I am really not excited by teaching machines.


Yep, some, like the reader you refer to, are taking strange positions on generative AI such as

  1. everyone will be using it so let me help make sure it becomes really good as in my image!
  2. it can destroy us all and perhaps that is a good thing so let us feed it more and better so it can try to destroy us faster and we can find out sooner!

Yes, personally I am neither enthusiastic nor apocalyptic about AI engines. I rather feel like John Backus in his last interview, in 2006:

So it’s a mixed bag.

But I don’t envy you, I’m afraid. I think that we’re getting more and more technological and less and less human oriented.

See John Backus (1924-2007), Fortran's father - #5 by vmagnin for a more complete citation.

Probably I would have been fascinated when I was a student. But I am not. I just see the mixed bag.

About 45 years ago, I used to play chess against a Sinclair 128KB machine. Most of the time I was the winner, but for some reason I was neither excited nor “happy”.

After all these years, I nowadays play against Stockfish. I always lose, but I am so proud of this human achievement.

I am not afraid of imagining a world dominated by AI. Many things will become trivial or obsolete and finally disappear, such as politicians, military personnel, judicial officers (judges, lawyers, etc.), public servants, economists, traders, businessmen, freelancers, private companies, and public national and international organizations, who “all” are responsible for “corruption and injustice”, while other things will develop and blossom, such as (very) highly intelligent human species, open public services (health, social, etc.), and individual education fitted to the genes and personality of each one…
