Advent of Code 2023

It is interesting to consider that such tools can impact not only education but also this kind of competition.

I have not experimented much. Some *.txt files in the repo show output for simulating pi using Monte Carlo and estimating Euler’s number using a Taylor series. ChatGPT can generate code that doesn’t compile, gives correct results, or gives wrong results. The last case is the most problematic.
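For reference, the Monte Carlo estimate of pi takes only a handful of lines of Fortran; this is just an illustrative sketch, not the generated code from that repo:

```fortran
program monte_carlo_pi
   implicit none
   integer, parameter :: n = 10000000
   integer :: i, inside
   real :: x, y

   inside = 0
   call random_seed()
   do i = 1, n
      call random_number(x)
      call random_number(y)
      ! Count points falling inside the quarter circle of radius 1
      if (x*x + y*y <= 1.0) inside = inside + 1
   end do
   ! The fraction of points inside approximates pi/4
   print '(a, f10.6)', 'pi estimate: ', 4.0 * real(inside) / real(n)
end program monte_carlo_pi
```

The estimate converges slowly (the error shrinks roughly as 1/sqrt(n)), so a large n is needed to get more than a few correct digits.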

1 Like

Thanks, I had not seen that. It seems those small problems typically need 4 attempts, which is reasonable.

Yes, TDD would be necessary for such cases.

No wonder someone wanted to bite the bullet and write a program to automate the tedious process of making AI work after several tries. However, quite often AI will only give you a code snippet and you are supposed to fill in the details (which is ok, but I wonder how a Python program will cope with that.)
In addition, AI may never find a correct solution. I’m sure I am not the only one who saw it failing several times in a row, then falling down the rabbit hole (the infinite loop I mentioned earlier: the exact same answer, presented as a new one.)

Programs like that prove we are just playing some form of the “imitation game reversed” here. We know in advance that the one we are “talking” with is not a human; it’s not intelligent, no matter how much we stretch the term. And we know it probably won’t make it by itself. The problem is then shifted from “using AI to accelerate some actual work” to “finding a good way to hold the AI’s hand step by step so it can finally walk”.

As a search engine, AI can be quite useful. As far as translation is concerned, a specialized AI is actually quite decent (but again, not to be trusted; you can’t just copy/paste its translation.) However, as far as programming is concerned, it is far from being even that.

Have you ever tried a chess program that has “characters” available? It gives you the option to choose the playing style of your opponent. You can pick an offensive opponent, a defensive opponent, a queen hunter, a Sicilian Defense advocate, whatever. While playing, it gives you the impression you are dealing with an intelligent opponent who also has some character. But it’s all fake, it’s just pre-programmed to play a specific way. All it does is comparisons, and it does it quickly. No intelligence involved.
Same thing here. We have an advanced parser able to (usually) “understand” what we ask and compose a readable answer. But it’s all fake: it understands nothing, because it has no intelligence to understand anything, and the answer is not to be trusted.

That is the good news: it won’t decide by itself (as it has no self) to register for Advent of Code 2023 and ruin your pleasure!

Finally we can play the imitation game, and we like it although we are sadly playing alone. We have not yet found an E.T., so maybe we are looking for an A.I.

1 Like

Since some of you were curious about the capabilities of ChatGPT-4, I asked it to write tests and the main program for task one of day one:

https://chat.openai.com/share/269d5bfb-9924-4b32-a7f7-f517dee93bee

BTW, in Python it can run its own code and directly verify it.
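For readers who haven’t opened the link: Part 1 of Day 1 asks for the sum of the “calibration values”, i.e. the first and last digit of each line combined into a two-digit number. A minimal Fortran sketch of that logic (not the code from the chat above; input.txt is a placeholder file name) could look like this:

```fortran
program day01_part1
   implicit none
   character(len=200) :: line
   integer :: unit, ios, i, first, last, total

   total = 0
   open(newunit=unit, file='input.txt', status='old', action='read')
   do
      read(unit, '(a)', iostat=ios) line
      if (ios /= 0) exit
      first = -1; last = -1
      do i = 1, len_trim(line)
         if (line(i:i) >= '0' .and. line(i:i) <= '9') then
            if (first < 0) first = iachar(line(i:i)) - iachar('0')
            last = iachar(line(i:i)) - iachar('0')
         end if
      end do
      ! Each line contributes a two-digit calibration value
      if (first >= 0) total = total + 10*first + last
   end do
   close(unit)
   print *, 'sum of calibration values:', total
end program day01_part1
```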

1 Like

It is impressive that it could do well on a subject that sounds rather original (and therefore probably not present in its training data, unlike, say, something about prime numbers).

And it does not seem perturbed (or even “interested”) by the Elves story. All the useless information seems to have been automatically filtered out.

Advent of Code participants now look like Kasparov… for this first problem, at least.

I’ve used AI to generate an image of what I will look like around day 20… seems pretty accurate!!

10 Likes

@jacobwilliams Great image, except it got the Fortran logo wrong!

1 Like

That was day 19 for me last year. And I still don’t know dynamic programming :man_facepalming:

1 Like

The answer is most definitely yes for me. It took less than 2 minutes to get a working solution in Python. I couldn’t have written it in such a short time frame, and I am good at Python. The Fortran solution took maybe 3 minutes, as I had to work around some simple bugs. I would not be able to get it done in Fortran in 3 minutes either; it would probably take me 10 to 20 minutes. So for me, for this task, I got at least a 4x speedup (possibly more).

2 Likes

In some ways ChatGPT is kind of like reviewing PRs for me: I am still responsible for the final outcome, but it does most of the real work; I only check the solution and focus on figuring out proper ways to fix it if the approach is incorrect. For Day 1 it was a big help. I am sure for some of the more complicated problems it will fail miserably and I might need to actually do them myself.

Definitely “inspired” by this logo found in Bing:

Be careful with copyright! :smile: Although the fortran-tech.com site no longer seems to be online.
I think using such a tool to generate your own company logo would be dangerous…

1 Like

Either the task was really simple, or you are using a much better AI than the one I have access to (I’ll definitely never pay for such a thing.)

But imagine it wasn’t @certik but a student who is just learning Fortran. They would need way more time and effort to write the program by themselves than your 10-20 minutes, but they would learn something in the process. If the AI can do it for them, they will learn absolutely nothing (unless of course they opt not to use it, which I doubt.) And they need to learn a lot of things.

It makes perfect sense for you to use the AI in this case. You won’t learn anything if you write the program yourself, because for you it’s just a simple task; you have done something similar countless times already. So it makes sense to just say “this is boring, write the program for me”. What about the student I mentioned, though?

1 Like

As She said:

Its province is to assist us in making available what we are already acquainted with.

To be acquainted with or not to be acquainted with, that is the question…

How do we teach students to use such meta tools when they don’t even master the basics? At which level? How do we prevent them from using such tools to avoid learning the basics? (Classroom teaching is needed more than ever, but the tendency to reduce public spending is cutting face-to-face time.)

If you find a way to do that for more than a tiny minority of them, let me know. Because I see no way to achieve this. I think it’s just human nature. If they have something that will save them a few steps, they will use it without a second thought. Even though they do need to take those steps themselves, without help.

2 Likes

That’s a different (but important) question, how to teach students. We should probably start a separate thread for that.

Chess engines have been better than humans for the past 20 years. If you want to learn chess, you have to learn how to play yourself, not let chess engines play for you. But you can use chess engines to help you get better. Isn’t this similar? If you want to learn programming, do it yourself. But you can use AI to help you get better. And if students don’t want to learn programming, then maybe they should do something else?

2 Likes

I see your point. I might have something to say, but my verbosity would turn it into just another long wall of text, which wouldn’t even be the first one today. And like you said, this is stuff for another thread.

Let me just say that I realized I liked Mathematics more than Physics (which, technically, was what I was studying) when I had that big moment of “wow, I actually proved that theorem concerning cubic splines all by myself”. I would have had an excuse to use an AI instead (after all, I was studying Physics, right? Why should I care about proving a theorem when I just need to use it?). I could very well have used that AI, if I had had it handy, losing that “big moment” forever and never realizing Numerical Analysis is actually my thing, not Physics.

2 Likes

day 2 completed: GitHub - dacarnazzola/Advent-of-Code-2023: advent of code 2023: https://adventofcode.com/2023

If they keep on with this string parsing for every question, I will probably end up making some little string parsing module with nice utility functions rather than rewriting custom parsing for each day.

This one runs in 1-2 ms; interestingly, 1/10th the input still takes 1/3 the time of day 1 for me. It’s probably a little less efficient to build up the character(len=5) :: string rather than just keeping a first and last index, but that approach would require more toggling logic. In the beginning I was close to creating a “split on delimiter” function returning a derived-type array of strings, since then it’s an easy pattern of every other element being a number or a color.
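Not the code from that repo, just a rough sketch of what such a “split on delimiter” utility could look like in Fortran (string_t and split are made-up names):

```fortran
module split_mod
   implicit none
   private
   public :: string_t, split

   ! Wrapper type so we can return an array of variable-length strings
   type :: string_t
      character(len=:), allocatable :: s
   end type string_t

contains

   ! Split `line` on a single-character delimiter and return the pieces
   function split(line, delim) result(parts)
      character(len=*), intent(in) :: line
      character(len=1), intent(in) :: delim
      type(string_t), allocatable :: parts(:)
      integer :: start, pos

      allocate(parts(0))
      start = 1
      do
         pos = index(line(start:), delim)
         if (pos == 0) then
            ! No more delimiters: the rest of the line is the last piece
            parts = [parts, string_t(line(start:))]
            exit
         end if
         parts = [parts, string_t(line(start:start+pos-2))]
         start = start + pos
      end do
   end function split

end module split_mod
```

With something like this, splitting each game record and its draws would presumably hand back the alternating count/color tokens mentioned above, instead of hand-rolled parsing for each day.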

1 Like

I don’t worry about speed unless it becomes a real problem. Typically my Fortran solutions for these puzzles are fast enough. I enjoy stretching myself and seeing which “modern” Fortran features I can make use of. The early days are fairly easy, but they get harder during the month and often require knowledge of some advanced mathematics I don’t know, so I end up going to the discussion board for a hint.

I also don’t strive to get them done as early in the day as possible for max points.

I’m visiting my mom for the next week, so I don’t know how many of these I will be able to do, but I am taking my laptop. I use Intel Fortran for this, and now use IFX exclusively; it will be interesting to see if I encounter any bugs.

2 Likes