Computing at compile time

gfortran has finished: it took five hours and sixteen minutes.

Intel Fortran oneAPI still going strong …

4 Likes

For added insight, comment out the WRITE statement and compare the times.

As I said in a post to the Intel forum about it, it is a silly program :slight_smile: And I appreciate gfortran offering a remedy right away for the limit.

1 Like

That is several orders of magnitude more memory than I, as a naïve programmer, would expect!

(The Intel compiler is still considering my program and is using 1 GB of memory. I have no idea how much it usually takes, as compilation is usually quick, so I would not even bother to find out.)

I have a teaching example which uses a 375,000+ word dictionary. The overall best way to get the ‘words’ into a data structure is to read the file. In Fortran I read into an array and use a binary search algorithm. In C# and C++ I use sets and set membership, but with both of these too I read the data in. The worst-case scenario is 375,000+ lines of source code. Not worth trying to ‘compute’ at compile time, in my opinion.
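
For concreteness, a minimal sketch of the Fortran variant I mean follows; the file name dictionary.txt, the word-length limit, and the array capacity are placeholder assumptions, and the word list is assumed to be already sorted:

! dict_lookup.f90 --
!     Sketch: read a sorted word list into an array and test membership
!     with a binary search. File name and size limits are placeholders.
program dict_lookup
    implicit none
    integer, parameter :: maxlen = 32, maxwords = 400000
    character(len=maxlen), allocatable :: words(:)
    character(len=maxlen) :: buffer
    integer :: lun, ios, n

    allocate( words(maxwords) )
    n = 0
    open( newunit=lun, file='dictionary.txt', status='old', action='read' )
    do
        read( lun, '(A)', iostat=ios ) buffer
        if ( ios /= 0 ) exit
        n = n + 1
        words(n) = buffer
    end do
    close( lun )

    write(*,*) 'words read:          ', n
    write(*,*) 'contains "fortran"?  ', found( 'fortran' )

contains

    ! Classic binary search over the sorted word array
    logical function found( word )
        character(len=*), intent(in) :: word
        integer :: lo, hi, mid

        found = .false.
        lo = 1
        hi = n
        do while ( lo <= hi )
            mid = (lo + hi) / 2
            if ( words(mid) == word ) then
                found = .true.
                return
            else if ( words(mid) < word ) then
                lo = mid + 1
            else
                hi = mid - 1
            end if
        end do
    end function found

end program dict_lookup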

I hereby nominate Arjen as Leader of the PACK Intrinsic (is groepsleider, Dutch for “group leader”, better?).

Here is a slight modification of his program, with two speed-ups:

  1. Avoid even numbers in candidates, except for 2.

  2. Avoid putting numbers larger than N in the multiples array.

! primesieve.f90 --
!     Is it possible to build a prime sieve using compile-time computation?
!
program primesieve
    implicit none

    integer, parameter :: N = 1000
    integer, parameter :: rtN = 31 ! isqrt(N)
    integer            :: i, j
    ! Candidates: 2 plus the odd numbers below N
    integer, parameter :: candidates(*) = [2,(i, i=3,N-1,2)]
    ! Odd composites up to N: odd i times odd j, for odd j up to sqrt(N)
    integer, parameter :: multiples(*)  = [(( i*j, i=j,N/j,2), j=3,rtN,2)]
    ! Keep the candidates that occur nowhere in multiples
    integer, parameter :: primes(*)     = pack( candidates, [(all(candidates(i) /= multiples), i = 1,size(candidates))] )

    write(*,'(10I8)') primes

end program primesieve

This version should work with more compilers, while retaining the attribute of “all calculations done at compile time”.

This version, with N equal to 1000, compiled with GFortran 12 (Windows Cygwin64) in less than 3 s (i7-10710U CPU). Intel’s Ifort and Ifx compilers took less than 1 second to compile and run. For this value of N, the size of the multiples array in this version is 563, whereas Arjen’s original version had it at nearly one million!

7 Likes

So now when this is done, what’s the next challenge for compile time computing?

Wonderful, that probably means the compile time will be reduced to more reasonable durations. I very much appreciate the efforts of compiler writers, and I cannot even begin to fathom the complexity of compilers. I said it elsewhere: programming seems to be rather chaotic, in the sense that a slight variation to a computational problem may result in very different code (or behaviour). Who would have anticipated that compile-time computing requires so many resources?

(Oh, and I gladly accept the nomination :slight_smile:)

Complexity seems to have dropped to O(N**2), for nagfor at least.
N=100,000 in 8 seconds.
N=200,000 in 32 seconds.

3 Likes

removed (see below)

2 Likes

Among the arrays in the program, the size of multiples increases the fastest. Asymptotically, it grows as M ≈ N·log(N). (This array is not as compact as it could be; it contains duplicates such as 81, which equals 9 × 9 as well as 3 × 27.)

A compiler could sort this array before scanning, stripping out duplicates, and then scan the sorted array using binary search for each candidate. The complexity of these operations may be estimated as M·log(M), or N·[log(N)]².
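
To make that idea concrete, here is a run-time sketch of the sort/deduplicate/binary-search approach (purely illustrative: nothing suggests any compiler does this internally, and the insertion sort is used only for brevity; a real implementation would need an O(M·log(M)) sort to reach the estimate above):

! sieve_sorted.f90 --
!     Sketch of the sorted-multiples idea, done at run time for illustration.
program sieve_sorted
    implicit none
    integer, parameter :: N = 1000, rtN = 31
    integer :: i, j
    integer, allocatable :: candidates(:), multiples(:), sorted(:), primes(:)

    candidates = [2, (i, i = 3, N-1, 2)]
    multiples  = [((i*j, i = j, N/j, 2), j = 3, rtN, 2)]

    ! Sort, then strip duplicates: keep an element only if it differs
    ! from its predecessor in the sorted order.
    call sort( multiples )
    sorted = [multiples(1), pack( multiples(2:), multiples(2:) /= multiples(:size(multiples)-1) )]

    ! Each membership test is now a binary search, O(log M) per candidate.
    primes = pack( candidates, [( .not. member( candidates(i), sorted ), i = 1, size(candidates) )] )
    write(*, '(10I8)') primes

contains

    ! Simple insertion sort - fine for a sketch, O(M**2) in general
    subroutine sort( a )
        integer, intent(inout) :: a(:)
        integer :: k, m, key
        do k = 2, size(a)
            key = a(k)
            m = k - 1
            do while ( m >= 1 )
                if ( a(m) <= key ) exit
                a(m+1) = a(m)
                m = m - 1
            end do
            a(m+1) = key
        end do
    end subroutine sort

    ! Binary search for value in an ascending array
    logical function member( value, table )
        integer, intent(in) :: value, table(:)
        integer :: lo, hi, mid
        member = .false.
        lo = 1
        hi = size(table)
        do while ( lo <= hi )
            mid = (lo + hi) / 2
            if ( table(mid) == value ) then
                member = .true.
                return
            else if ( table(mid) < value ) then
                lo = mid + 1
            else
                hi = mid - 1
            end if
        end do
    end function member

end program sieve_sorted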

1 Like

removed, see below

I have no plans whatsoever. And I do not think I am obliged to have any. If only potential developers are allowed to make comments, this whole forum had better be deleted. I am removing my posts now that you consider them so offensive.

1 Like

Is a compiler required to evaluate constant expressions at compile time? “Silly programs” aside, if I have, for example,
iAny([SWP_FRAMECHANGED,SWP_NOMOVE,SWP_NOSIZE,SWP_NOZORDER])
(where the SWP_s are parameters) as an argument in a function call, can I expect the compiler to pass a constant evaluated at compile time or is it at liberty to generate code that evaluates a value at run time? I had a quick look at the standard but couldn’t see anything that mentioned it.

The compiler is not required to do anything in particular, except somehow produce the correct output. Even the code in a called subroutine might not be executed if the variables it modifies do not affect the output.
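
Not an answer from the standard, but a practical sketch of one way to get the guarantee in the iAny case above: bind the expression to a named constant, whose initialization must be a constant expression (this assumes a compiler that accepts IANY in constant expressions, as in Fortran 2008; the SWP_ values below are illustrative placeholders, not taken from the actual headers):

! fold_flags.f90 --
!     Sketch: force compile-time evaluation by binding the expression
!     to a named constant. The SWP_ values are illustrative placeholders.
program fold_flags
    implicit none
    integer, parameter :: SWP_NOSIZE       = int(z'0001')
    integer, parameter :: SWP_NOMOVE       = int(z'0002')
    integer, parameter :: SWP_NOZORDER     = int(z'0004')
    integer, parameter :: SWP_FRAMECHANGED = int(z'0020')

    ! The value of a named constant is fixed before execution; pass FLAGS
    ! as the actual argument instead of writing the IANY expression in the call.
    integer, parameter :: flags = iany([SWP_FRAMECHANGED, SWP_NOMOVE, SWP_NOSIZE, SWP_NOZORDER])

    write(*, '(A, Z4.4)') 'combined flags = 0x', flags
end program fold_flags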

Surely a pipe dream of mine, born of looking at too many bugs reported on online forums when it comes to various derived-type options and Fortran processors!

But if the GCC/gfortran front-end could miraculously/magically be “bootstrapped” in modern Fortran, preferably the 2018 standard revision as opposed to the current GCC C basis, chances are rather high that many would attempt “to be part of the solution”, and invitations might not even be necessary. Or if the front-end were refactored to modern C++, say >= C++20, there might be takers.

If the community growing on Fortran Discourse etc. for more than 18 months is not producing more and more volunteers for the GCC/gfortran FOSS group, it might indicate a reluctance to work with GCC’s C codebase and workflow, the need for more tutorials and help manuals to lighten the ostensibly daunting nature of the task, or both.

1 Like

Too bad, @msz59, that you removed your comment; it contained very useful timing information and other time-complexity observations. Would you mind restoring it? I wanted to implement the missing features in LFortran, then try it and compare.

If you want, just remove the “snail compiler” comment. I personally don’t think it’s offensive, but @kargl did not like it, so I think it’s fine to remove.

In this forum anyone is allowed to make comments, developers or not.

Speaking from personal experience, when you spend a lot of time contributing to open source, it’s easy to feel unappreciated: a lot of people post negative things about a project you spent years contributing to, almost nobody says positive things, and only a few end up contributing and helping out. Developing a compiler such as GFortran is a monumental task, and even though I didn’t contribute to GFortran myself, I suspect you hear fewer “good comments” and mostly hear when something does not work. I think most people who haven’t contributed to open-source projects before, or not to compilers, don’t realize that.

Long story short: let’s assume the best intentions of others, and let’s be appreciative of each other’s work.

6 Likes

I hope kargl does not take offense at what I am about to say.
I wanted to say to kargl: you are no doubt a very capable Fortran guy. In fact, many people here, perhaps all of us, respect you very much. Without you this Discourse would not be complete.
I am not qualified to comment on you (your Fortran expertise is way above mine).
But perhaps you could try to think of it this way:
people are reporting bugs/issues (even if they did not fix them) simply because they are using gfortran and Fortran. In other words, it means you are doing a great job keeping people using gfortran and Fortran for so many years, right? That can only mean, believe me, that you have done a HUGE job :100: :+1: :slight_smile: So the bottom line is, you can be very proud of yourself every day, and you do not have to take anything regarding bugs/issues personally.
If someday no one discusses bugs anymore, perhaps that will really mean something is wrong.

5 Likes

I have been using gfortran since it first appeared back in 2003 or 2004, earlier using g77 or, even earlier, f2c. I think gfortran is great software and I appreciate everybody involved in this project. The idea that I would want to take a jab at the gfortran community because it is relatively slow at compiling some geek code is so absurd that I wanted to cut short this discussion, introduced by an unfortunate wording of mine (actually borrowed from the opposition of e-mail vs. snail mail, which I have always considered harmless and funny).

I do not think it was of any considerable value; the code by @mecej4 is available, so everybody can easily repeat my test on his/her own machine. Still, I can put it here once again (done on an Intel E5-2650 v3, 256 GB RAM, RockyLinux 8.5 machine):

             gfortran |  ifort    (times in seconds)
N=1000          2.8   |    1.1    (so far so good)
N=2000         12.7   |   12.0
N=4000         58.7   |  133.0    (reaching 2.7 GB resident memory!)

BTW, to my great surprise (and as a warning!) I found that everybody has access to the full history of edits of any post, not only their own. That is really weird.

4 Likes

There is a recent thread, 79 languages…, regarding a benchmark of several languages at finding the prime numbers less than one million. For those who do not follow it: there is a leading group of three (Lisp, C++, D) which outperforms the remaining competitors by several orders of magnitude (!).
A comment by @Jweber puts those strange results into the context of this topic by noting that the C++ winner uses constant expressions extensively, so there was not much left to do at run time. I wonder how the @Arjen/@mecej4 code would do in that benchmark, but to check that, one would have to use nagfor, which, according to what we already know, is the only compiler possibly able to manage N=10**6. Maybe @themos?
The bad news is that the C++ constexpr code (for N=10**6) compiles with GNU g++-11 in only 7 seconds.

3 Likes