I struggled with the day 12 problem in Fortran. The initial issue was building an adjacency matrix from the input file, which I suppose reflects a gap in my understanding of Fortran. I first read the input into a character array, then replaced every occurrence of each cave name with a number, and then walked through the renamed strings to build the adjacency matrix. For example, I renamed start-A, A-end first to 1-A, A-end, then to 1-2, 2-end, and finally to 1-2, 2-3, and constructed the adjacency matrix from that string. I guess I missed an easier approach.
In a programming language with a map keyed on strings (like C++) this would be trivial to do.
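In Fortran, a minimal substitute for such a map is a lookup function over the names seen so far. The sketch below uses names I made up (cave_index_mod, name_to_index), not code from the post above; it only illustrates the renaming step:

module cave_index_mod
    ! Sketch with assumed names: map each cave name to a small integer by
    ! linear search over the names registered so far.
    implicit none
    integer, parameter :: max_caves = 64, name_len = 8
    character(len=name_len) :: names(max_caves) = ''
    integer :: n_names = 0
contains
    integer function name_to_index(name)
        character(len=*), intent(in) :: name
        integer :: i
        do i = 1, n_names
            if (trim(names(i)) == trim(name)) then
                name_to_index = i
                return
            end if
        end do
        n_names = n_names + 1          ! unseen cave: register it
        names(n_names) = name
        name_to_index = n_names
    end function name_to_index
end module cave_index_mod

Each edge line then becomes a pair of indices, and the corresponding adjacency matrix entries can be set directly.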
Another day of populations and growth; the second part of day 14 was entirely expected.
This time I decided to use a language without actual arrays, so no matrix algebra is available (there are not even minval and maxval equivalents in awk). But awk has multidimensional associative arrays (essentially dictionaries), which made solving this problem quite interesting and fun. I think the main loop is quite expressive:
for (kk in input) {
    new = grow[kk]
    split(kk, pair, SUBSEP)
    output[pair[1], new] += input[kk]
    output[new, pair[2]] += input[kk]
}
When counting the species at the end, one can do the same iteration, accumulating the counts from the pairs into the individual species. The only thing I miss in awk is that I can't iterate over the several indices directly; I only get the composite index. Splitting the composite index is still convenient enough, though.
I rather like the idea of having a multidimensional dictionary built into the language; I wonder how hard it is to emulate in other languages?
I agree that a dictionary feature is very useful. Of course, I implemented my own in Fortran for part 1, but while it worked, it was way, way too slow for part 2, because the dictionary was keyed on single characters and I didn't index the pairs in the current polymer. I had to refactor it.
I also felt the need for a dictionary, for the second time. Since the input was only letters (and uppercase at that), I managed this one with 2D arrays dimensioned (26,26).
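As an illustration of that idea (my own sketch with made-up names, not the poster's code), one insertion step can be written over such a (26,26) pair-count array, where rule(i,j) holds the index of the letter inserted between letters i and j:

subroutine grow_step(pairs, rule)
    ! Sketch with assumed names: one pair-insertion step for day 14.
    ! Letters are encoded as 1..26, e.g. iachar(c) - iachar('A') + 1.
    implicit none
    integer(8), intent(inout) :: pairs(26,26)   ! counts of adjacent letter pairs
    integer,    intent(in)    :: rule(26,26)    ! inserted letter for each pair, 0 if no rule
    integer(8) :: new_pairs(26,26)
    integer :: i, j, k
    new_pairs = 0_8
    do j = 1, 26
        do i = 1, 26
            k = rule(i, j)
            if (k > 0) then
                ! the pair (i,j) turns into the two pairs (i,k) and (k,j)
                new_pairs(i, k) = new_pairs(i, k) + pairs(i, j)
                new_pairs(k, j) = new_pairs(k, j) + pairs(i, j)
            else
                new_pairs(i, j) = new_pairs(i, j) + pairs(i, j)   ! no rule: pair unchanged
            end if
        end do
    end do
    pairs = new_pairs
end subroutine grow_step

Counting individual letters at the end works the same way as in the awk version: sum each pair count into one of its two letters, taking care of the first or last letter of the template.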
Mine was 2 min 10 s with Dijkstra. I did not use a priority-queue data structure; colleagues are reporting better times with a heap and with the A* algorithm.
My part 2 time was around two minutes with Dijkstra. For part 2, I wrote a small function to fake the risk level for cells outside of the original dataset, adjusting for the tile. I would have finished sooner if I hadn't misread the instructions. Fortran was well suited to this one (MINLOC with MASK again).
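For reference, such a function can be quite small. The names below (tiled_risk, grid) are mine, but the wrapping rule is the puzzle's: each tile to the right or below adds 1 to the risk, and anything above 9 wraps back to 1:

integer function tiled_risk(grid, i, j)
    ! Sketch with assumed names: risk at (i,j) in the 5x-enlarged map,
    ! computed on the fly from the original grid without storing the big map.
    implicit none
    integer, intent(in) :: grid(:,:)   ! original risk map
    integer, intent(in) :: i, j        ! 1-based indices into the enlarged map
    integer :: nx, ny, ti, tj
    nx = size(grid, 1)
    ny = size(grid, 2)
    ti = (i - 1) / nx                  ! tile offsets, 0..4
    tj = (j - 1) / ny
    tiled_risk = mod(grid(mod(i - 1, nx) + 1, mod(j - 1, ny) + 1) - 1 + ti + tj, 9) + 1
end function tiled_risk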
Day 18 wasn't hard, just time-consuming. I took an incremental approach, checking as I went. Once I had part 1 done, part 2 was a doddle (though inelegant; it seems nearly everyone ended up doing the same brute force, which completes quickly enough). No fancy Fortran here, just pointers and recursion.
It took me a bit of time; in particular, it took a while to realise that the explosions always happen before any splits.
I used a binary tree. Using pointers in Fortran was, to my surprise, a pleasant experience.
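For anyone curious, the node of such a tree might look roughly as below. This is my own sketch with assumed names, not the poster's code; only the magnitude rule (three times the left child plus twice the right) comes from the puzzle itself:

module snail_mod
    implicit none
    ! Sketch with assumed names: a snailfish number as a binary tree.
    ! A leaf stores a value; an internal node stores pointers to its children.
    type :: snail_node
        integer :: value = -1                           ! meaningful only for leaves
        type(snail_node), pointer :: left   => null()
        type(snail_node), pointer :: right  => null()
        type(snail_node), pointer :: parent => null()   ! handy when exploding pairs
    end type snail_node
contains
    recursive integer function magnitude(node) result(m)
        ! Puzzle rule: a leaf is its value, a pair is 3*left + 2*right.
        type(snail_node), intent(in) :: node
        if (.not. associated(node%left)) then
            m = node%value
        else
            m = 3 * magnitude(node%left) + 2 * magnitude(node%right)
        end if
    end function magnitude
end module snail_mod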
type :: packet_type
    integer(ip) :: version = -1
    integer(ip) :: type_id = -1
    integer(ip) :: length_type_id = -1
    integer(ip) :: value = -1
    type(packet_type), dimension(:), allocatable :: subpackets
contains
    procedure :: parse => parse_packet
    procedure :: evaluate => evaluate_packet
end type packet_type
which I can then evaluate easily for part 2 with this recursive method:
recursive elemental function evaluate_packet(me) result(val)
    !! part b
    implicit none
    class(packet_type), intent(in) :: me
    integer(ip) :: val
    select case (me%type_id)
    case (4); val = me%value
    case (0); val = sum(me%subpackets%evaluate())
    case (1); val = product(me%subpackets%evaluate())
    case (2); val = minval(me%subpackets%evaluate())
    case (3); val = maxval(me%subpackets%evaluate())
    case (5); val = merge(1, 0, (me%subpackets(1)%evaluate() > me%subpackets(2)%evaluate()))
    case (6); val = merge(1, 0, (me%subpackets(1)%evaluate() < me%subpackets(2)%evaluate()))
    case (7); val = merge(1, 0, (me%subpackets(1)%evaluate() == me%subpackets(2)%evaluate()))
    case default
        error stop 'oops: unknown packet id'
    end select
end function evaluate_packet
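(A nice side effect of making evaluate elemental is that me%subpackets%evaluate() applies it across the whole subpacket array. Part 2 then presumably reduces to a single call on the root packet, roughly like the hypothetical fragment below; the parse call's arguments are elided because they are not shown in the post.)

type(packet_type) :: root
integer(ip) :: answer
! call root%parse(...)        ! build the tree from the decoded input (signature not shown)
answer = root%evaluate()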
This is the first time I've used a recursive allocatable structure like this. I did notice that an automatic LHS reallocation was crashing both ifort and gfortran.
Day 21 was a nice experience for me. At first I couldn't imagine a way to overcome the exponential growth, but after grabbing pen and paper and doing some quick maths I realised I could drastically reduce the number of parallel universes.
My program runs in roughly two seconds.
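I'm guessing the pen-and-paper step was the usual one for this puzzle: the three rolls of the quantum die only ever sum to 3 through 9, so the 27 new universes per turn collapse into 7 weighted outcomes (weights 1, 3, 6, 7, 6, 3, 1). A few lines are enough to tabulate that:

program dirac_weights
    ! Count how many of the 3*3*3 = 27 roll combinations give each total 3..9.
    implicit none
    integer :: a, b, c, counts(3:9)
    counts = 0
    do a = 1, 3
        do b = 1, 3
            do c = 1, 3
                counts(a + b + c) = counts(a + b + c) + 1
            end do
        end do
    end do
    print '(a, 7i3)', 'universes per total (3..9):', counts   ! 1 3 6 7 6 3 1
end program dirac_weights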