Advent of Code 2023

Advent of Code is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

The first puzzles will unlock on December 1st at midnight EST (UTC-5). See you then!

Previous topics:

Previous leaderboards:

If you’d like to compete versus other Fortranners:

  1. Register/Login at
  2. Go to Leaderboard - Advent of Code 2023
  3. Enter the code: 1510956-4380811b

(@Carltoffel, I hope we can keep using your leaderboard?)


If you want an extra challenge this year, use LFortran and work around (and report!) any current limitations that you find.


If you post your solutions on GitHub please use the topic tag aoc-2023-in-fortran and tag repos from previous years. There are 3 repos with the analogous 2022 tag and 5 from 2021.


Yesterday, my workmates made fun of me because I wanted to use Fortran for AoC (again). Today, someone said “beat my 90 ms” (Rust). Well, my Fortran approach takes about 4 ms (for both tasks).
And they weren’t very amused that even my shell/sed/bc approach (5+6 ms) is faster than their Python approach (16 ms). :joy:

PS: @ivanpribec of course we can! :slight_smile:


Such a catastrophic energy cost… 21st-century programmers need to learn again how to use a CPU efficiently. They have enjoyed too much comfort for too long with today’s overly powerful hardware…


@vmagnin This is very true. Just limit the hardware resources and they will appreciate the Fortran style.


The absolutely amazing part to me is that if you just copy & paste the task into ChatGPT, it produces Python code; I run it, it fails, I copy & paste the error back, and it produces a working version that just works for the full input (task 1).

I then ask it to rewrite the code in Fortran; it fails to compile, I paste the error back, and iterate twice. Then it compiles, but produces the wrong answer. I then manually fix the code (there was a tiny bug), and now it works. The final code is here.

They ask not to use AI, so I am not going to submit it. It’s not the shortest code possible, and probably not the fastest (it runs in 10 ms for the full input on macOS with -Ofast and GFortran, although still faster than the Rust version above), but I didn’t have to do anything myself.


The general structure of the ChatGPT program is similar to what a human would write, but it could benefit from some encapsulation, in particular the scan function.

I bet Awk programmers were done in 5 minutes today. I also chose to structure my program as a filter:

Spoiler alert - Day 1
        use, intrinsic :: iso_fortran_env, only: iostat_end
        integer :: total, nf, nb, ierr
        character(len=100) :: input

        total = 0
        do
            read(*,*,iostat=ierr) input
            if (ierr == iostat_end) exit
            nf = scan_digit(trim(input))              ! first digit on the line
            nb = scan_digit(trim(input),back=.true.)  ! last digit
            total = total + (10*nf + nb)
        end do

Runs in 4 ms (or less) (compiled with -fcheck=all -g -O0).
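For reference, here is one way a scan_digit helper like the one used above could look, built on the intrinsic SCAN. This is my own sketch (the original helper is not shown in the post), and it assumes the line contains at least one digit:

        ! Return the value of the first digit in s, or of the last
        ! digit when back=.true. (the search itself is delegated to
        ! the intrinsic SCAN, hence the name).
        pure function scan_digit(s, back) result(d)
            character(*), intent(in) :: s
            logical, intent(in), optional :: back
            integer :: d, pos
            pos = scan(s, '0123456789', back)    ! position of first/last digit
            d = iachar(s(pos:pos)) - iachar('0') ! character -> integer value
        end function scan_digit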


I am curious about this. Are you using the free chatGPT (I believe GPT 3.5 based), or the paid GPT 4 version? I wonder what the quality difference is for tasks like programming.

Additionally, I concur with others here that my human-written Fortran version runs in ~4-6 ms. Since this is on a 2008 Core 2 Duo laptop processor, and the timing doesn’t seem to change between gfortran -O0 with a bunch of debug flags and gfortran -Ofast -march=native, it seems safe to say that parsing 1000 lines just isn’t that hard.


I don’t really like this kind of competition, so I didn’t want to compete, but rather wanted to take @certik’s challenge. I must admit my sin: being busy lately, I didn’t update LFortran for about two months, so it is about time to do so. AoC would be a nice excuse to put off an extremely boring task I have to do, and do something way more interesting instead.

Anyway, the problem is that the website doesn’t work for me. All I can see is a simple text menu at the top, some (readable) text squished far to the right (which seems to be a simple ad), and a timer ticking at the bottom. I assume the timer counts down to the first event, but I don’t see anything confirming that. The vast majority of the webpage is blank, presumably because of my privacy settings, ad blocker, and Privacy Badger, which are always active and will stay that way no matter what. I can, however, see the previous years (kind of; it’s messy but readable), so I might as well use those.

Why am I not surprised? :slight_smile:

Typical artificial “intelligence” behavior. In fact, you are lucky you got a working answer in the first place, even with erroneous results. I have tried many times (and I’m still trying from time to time, out of curiosity), mainly with Fortran and C. It usually takes a lot of iterations like “(1) no, this is not correct; (2) you are right, here is a refined example (which of course won’t even compile)”. The typical procedure is to get a completely wrong example as an answer, and after several iterations you may or may not get a correct one. More often than not, the program snippet I get contains function calls with the wrong number of arguments (or arguments that don’t even exist in a C library, for example). Quite often it hangs up after several iterations, falling into an infinite loop: it gives the exact same answer again and again for an eternity, even though it says it’s a “modified version” of the previous one (which it is not).

And it’s not restricted to programming languages. My last experience of this was yesterday, when I asked GPT to write a gnuplot script to do something I needed. I just wanted to save some time looking at the documentation. About 10 tries later it gave up, saying what I wanted to do is not possible in gnuplot, which is wrong. There was a scrap of truth in its final answer, saying “unless you heavily modify your data files”; this indeed could be a way to do what I wanted, but it is not necessary, because it can be done far more easily. The AI completely failed to give code even remotely close to that.

I estimate I saved some time using AI in something like 30-40% of the cases I used it (and those were simple cases). The rest was a waste of time; I could have done what I wanted faster without it.

There is a reason Richard Stallman calls AI a “BS generator” (I censored myself here). All it is, at least for now, is a glorified search engine, admittedly with an impressive parser. And that’s about it. Whoever trusts AI is a fool. If it ever gives a correct answer (and it does, in some cases that are far from the majority), it seems to be just a lucky hit. Whenever you use it, remember the old maxim “trust, but verify”, paraphrased: that is, don’t trust, and always verify.


Yes, it can accelerate your work, but it is work in itself to guide it toward what you want and judge as correct. Who else could judge? There is no ghost in the shell… The poor AI does not even care if its results are correct, because there is nobody there who could care: nobody to fear giving the wrong answer, or to be proud of giving the right one! (no emotions) A very bad pupil…

Using such tools is like exploring a tree: it sends you onto a branch that has a relatively high probability of being good, and you guide it toward a smaller branch (closer to what you want), again and again, or you decide to go back to another branch because it’s the wrong way.

You do not program it with a programming language; you must guide it using natural language… And judge the results at every step.

ChatGPT (and other similar AI engines) does not try to provide the truth, but a plausible answer, based on some probabilities. Much like “BS” talk, which just has to be plausible.


Note G of Ada Byron, written in 1843, says:

It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.
The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.

Please come back, dear Countess!


Precisely. The question is, can it really accelerate your work, when the work to guide it takes more time? Unless you ask something very simple, guiding the AI is like holding its “hand” at every step, pretty much like a baby that has just learned how to walk; and even worse than that, because the baby at least knows what (s)he wants to do. Not to mention that guiding a baby in his first steps is fun in itself. Guiding the AI to finally give the correct answer you are looking for is not fun; it’s a chore.

In essence, we are playing Turing’s imitation game, which is certainly interesting, but it is not saving time for the actual work we need to do.

In very simple cases, when I used it as a search engine and nothing else, it did indeed save me some time, because the work to guide it was minimal. For instance, I heard a French song on the radio which I liked, but I didn’t know who the singer was or the name of the song. I asked GPT for this information, based on some words I was able to catch while the song was playing. And it delivered, saving me some time looking it up myself. In more complicated cases, however, it just wasted my time, with crappy answers again and again for an eternity, no matter the effort to guide it.

The answers are indeed “plausible”, in the sense that they are usually well written (with English grammar and vocabulary better than mine, that is). But what’s the point, when you know the answer is most probably wrong…

Ada Byron was a great mind (no wonder, given who her father was) and she was right. Perhaps we ask too much from a machine. The fact is, people do that anyway.
AI clearly tells you not to trust its answers blindly, but only if you ask about that. Instead, every AI should have a warning banner in big fat letters, right after every single answer. But of course they will never do such a thing, because they sell a product, and you don’t add a banner saying “my product is not to be trusted”. And even if they did, I can easily picture a teenager ignoring the warning and trusting the answer without verifying, because all (s)he cares about is getting the job done quickly so they can return to their mobile phone and its crappy games.

But I digress a lot. Back to the topic: using AI to compete in AoC is… not a good idea, to say the least. It is, however, a good opportunity to see it failing once again. :laughing:


Every teacher needs to test such tools to tell his/her students what they can and cannot expect. I have started playing a little with Bing Chat. I have not yet discovered what I could do with it. But by practicing, you understand how it works, what you can expect or not, how to guide it, etc. And you will know whether it is worth spending your time on it or not, depending on what you need.

I have no doubt that people paid to write code fast will use such tools when they need a particular subroutine, once they have learned how to use them efficiently and with a critical mind. But if pleasure is important in your programming practice, you may not really be attracted. Although some pleasure can probably arise from mastering the beast.

But concerning translation, AI tools are useful and rather efficient, and I use them when I am not interested (no pleasure) in the process of translating a text.

The big problem with AI tools is that when you expect to gain billions of dollars, truth is not your primary value. You seem ready to promise not only the Moon or Mars but why not Alpha Centauri or the conquest of the Milky Way. Meditating, critically, on what Pascal said about his arithmetic machine, our Countess about the Analytical Engine, and Turing and von Neumann, is more than ever necessary to keep a sane mind. We need deep roots, else we could skyrocket with each new invention.

Another great mind of the 19th century was Mary Wollstonecraft Godwin (known as Mary Shelley), who wrote Frankenstein; or, The Modern Prometheus. It was the century of electricity, and some people were convinced it was the secret to creating life (sure, nerves and neurons use some kind of electrical signals). The 21st century now seems to be the century of artificial neural networks and has its own Doctor Frankensteins. They promise too many wonders and cataclysms, until you are tired of hearing them talk (saturating the media is a strategy you can afford when you have billions).

I knew the second paragraph of Note G, but not the first one, which is really suited to our time. We should neither overrate nor undervalue those inventions, just watch them coldly.


That’s the way the page is constructed. I don’t think it has to do with ad blockers. Anyway, you can hover the mouse above the rows to go to that day’s challenge:

As you move through the challenges, an ASCII art is uncovered. Instead of clicking, you can also go to the day’s URL directly: increment the last digit to move to the next day.

I’m not in it for the competitive aspect either. I see it more for fun, just like some people like to solve sudoku or crossword puzzles while having their morning tea or afternoon coffee.

I shared this Dijkstra quote before, but I think it’s worth repeating:

To the economic question “Why is software so expensive?” the equally economic answer could be “Because it is tried with cheap labour.” Why is it tried that way? Because its intrinsic difficulties are widely and grossly underestimated. So let us concentrate on “Why is software design so difficult?”. One of the morals of my answer will be that with inadequately educated personnel it will be impossible, with adequately educated software designers it might be possible, but certainly remain difficult.

I suppose that Leslie Lamport would agree with this, when he says “coding is the easy part of programming.”

An afterthought: if the cheap labor is to be replaced by “cheap” GPU cycles (according to some news, Amazon Web Services bought 2 million A100 and H100 GPUs from NVIDIA), and we don’t learn to program at a higher level, I’m not sure what could improve. There is also the question of the climate costs of wide-scale AI adoption.


The ASCII art is not revealed when I hover the mouse over there. However, if I click blindly around that area, I get Day 2’s challenge. Thank you, @ivanpribec!

I like the way the problem is presented, pretty much like D&D or “Das Schwarze Auge” (The Dark Eye) tabletop games. The solution itself, however, seems pretty straightforward. I guess the challenge is not how to implement a program solving the problem but rather how to make it run as fast as possible.

I wonder if someone will be insane enough to write the program in… Forth, or even worse, INTERCAL, or any of those programming languages designed to really hate the programmer. :laughing:

The ASCII art is uncovered over the course of the advent calendar as you solve the challenges.

To avoid such manual iteration, my ChatGPT-Fortran-generator is a Python script that uses ChatGPT to generate Fortran code for given tasks, compiles it with gfortran, fixes errors and warnings, and runs it. ChatGPT is asked to code each task multiple times, because sometimes it produces code that fails to compile or gives wrong answers.

One could generalize the script to other compilers and programming languages.
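The generate/compile/retry loop such a script implements can be sketched in a few lines of Python. The names `generate` and `check` here are illustrative placeholders, not the actual script's API:

```python
# Sketch of a generate/compile/retry loop like the one described above.
# `generate(feedback)` asks the model for code (feedback is the previous
# error message, or None on the first try); `check(code)` compiles and
# runs the candidate, returning (ok, error_message). Both callables are
# hypothetical stand-ins for the real script's internals.
def solve_with_retries(generate, check, max_attempts=5):
    """Return the first candidate that compiles and runs correctly,
    or None if every attempt fails."""
    feedback = None
    for _ in range(max_attempts):
        code = generate(feedback)
        ok, feedback = check(code)  # e.g. run gfortran, then the binary
        if ok:
            return code
    return None
```

In the real script, `check` would presumably invoke gfortran via `subprocess` and compare the program's output with the expected answer.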


Can you tell us more about what kind of code you can achieve? And in how many iterations?

I guess the next step could be to use TDD: you write the tests first, then the script asks ChatGPT to write the main code, then checks whether the tests pass, asks ChatGPT to improve the result, etc.

It would be partially similar to evolutionary optimisation (genetic algorithms). The fitness function could be the number of passed tests.
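As a toy illustration of that idea (my own sketch; the interface is made up), the fitness of a generated candidate could simply count how many test cases it passes:

```python
# Hypothetical fitness function for ranking generated solutions:
# score a candidate by the number of (inputs, expected) pairs it gets right.
def fitness(candidate, tests):
    passed = 0
    for args, expected in tests:
        try:
            if candidate(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate simply fails that test case
    return passed

# Example: test cases for an addition routine.
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
```

A selection step would then keep the highest-scoring candidates and feed them back to ChatGPT for mutation, much like a genetic algorithm.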