Computing at compile time

The compiler is not required to do anything, except somehow produce the correct output. The code in the called subroutine might not be executed if the modified variables do not affect the output.

Surely a pipe dream of mine, born of looking at too many bugs reported on online forums involving various derived-type options and Fortran processors!

But if the GCC/gfortran front end could miraculously/magically be “bootstrapped” in modern Fortran, preferably the 2018 standard revision as opposed to the current GCC C basis, chances are rather high that many would attempt “to be part of the solution”, and invitations might not even be necessary. Or if the front end were refactored to modern C++, say >=C++20, there might be takers.

If this growing community on Fortran Discourse etc., over more than 18 months, is not leading to more and more volunteers for the GCC/gfortran FOSS group, it might indicate either a reluctance to work with GCC's C codebase and workflow, and/or the need for more tutorials, help manuals, etc. to lighten the ostensibly daunting nature of the task.


Too bad, @msz59, that you removed your comment; it contained very useful timing information and remarks on time complexity. Would you mind restoring it? I wanted to implement the missing features in LFortran, try it, and compare.

If you want, just remove the “snail compiler” comment. I personally don’t think it’s offensive, but @kargl did not like it, so I think it’s fine to remove.

In this forum anyone is allowed to make comments, developers or not.

Speaking from personal experience: when you spend a lot of time contributing to open source, it is easy to feel unappreciated when many people post negative things about a project you spent years contributing to, almost nobody says positive things, and only a few end up contributing and helping out. Developing a compiler such as GFortran is a monumental task; even though I did not contribute to GFortran myself, I suspect its developers hear few “good comments” and only hear when something does not work. But I think most people who have not contributed to open-source projects before, or not to compilers, do not realize that.

Long story short: let’s assume the best intentions of others, and let’s be appreciative of each other’s work.


I hope kargl will not be offended by what I am about to say.
I wanted to say to kargl: you are no doubt a very capable Fortran guy. In fact, many people here, perhaps all of us, respect you very much. Without you the discourse would not be complete.
I am not qualified to comment on you (your Fortran expertise is way above mine).
But perhaps you could try to think of it this way:
people are reporting bugs/issues (even if they did not fix them) simply because they are using gfortran and Fortran. In other words, it means you have been doing a great job keeping people using gfortran and Fortran for so many years, right? Believe me, let me just say this: you have done a HUGE job :100: :+1: :slight_smile: So the bottom line is, you can be very proud of yourself every day, and you do not have to take anything regarding bugs/issues personally.
If someday no one discusses bugs anymore, perhaps that will really mean something is wrong.


I have been using gfortran since it first appeared back in 2003 or 2004, earlier using g77 or, even earlier, f2c. I think gfortran is a great piece of software and I appreciate everybody involved in the project. The idea that I would want to take a jab at the gfortran community because it is relatively slow at compiling some geek code is so absurd that I wanted to cut this discussion, introduced by my unfortunate wording (actually borrowed from the opposition of e-mail vs. snail mail, which I always considered harmless and funny).

I do not think it was of any considerable value; the code by @mecej4 is available, so everybody can easily repeat my test on their own machine. Still, I can put it here once again (done on an Intel E5-2650 v3, 256 GB RAM, RockyLinux 8.5 machine):

           gfortran |  ifort     (compile times in seconds)
N=1000        2.8   |    1.1    (so far so good)
N=2000       12.7   |   12.0
N=4000       58.7   |  133.0    (reaching 2.7 GB resident memory!)

BTW, to my great surprise (and as a warning!) I found that everybody has access to the full history of edits of any post, not only their own. That is really weird.


There is a recent thread 79 languages… regarding a benchmark of several languages in finding the prime numbers below 1 million. For those who do not follow it, there is a leading group of three (Lisp, C++, D) which outperform the remaining competitors by several orders of magnitude (!).
A comment by @Jweber puts those strange results into the context of this topic by noting that the C++ winner uses constant expressions extensively, so there was not much left to do at run time. I wonder how the @Arjen/@mecej4 code would do in that benchmark, but to check that, one would have to use nagfor, which, according to what we already know, is the only compiler possibly able to make it with N=10**6. Maybe @themos?
The bad news is that the C++ constexpr code (for N=10**6) compiles with GNU g++-11 in only 7 seconds.


Yes, that would be an interesting tech demo. I was not even aware that such possibilities exist in the Fortran language (although I assume it is not necessarily a very good idea to use for large data sets in production code…).

Just two remarks concerning the competition.

a) These results where the compiler does the work are somewhat outside the competition (I will go into a bit more detail in the 79 languages thread).
b) The results on the web page are not reported values but computed on a small set of systems via a script that in turn obtains its results from Docker images with the compilers/interpreters.
Because of this, free (as in beer and speech) compilers are a requirement, and I guess the NAG compiler cannot be integrated into the system.

But I do not really think that this is a problem. It is somewhat “cheating” anyhow.


The N=1,000,000 case is compiled by nagfor in 13 minutes, 19 seconds. Clearly, there is room for improvement.

999919 999931 999937 999941 999953 999959 999961 999979 999983 999997


@themos, what do you think the default behavior should be? Currently nagfor will effectively “hang” for very large N, correct? GFortran, meanwhile, refuses to compile N beyond 64K unless you pass a special compiler flag.

What would I like as a user? I don’t know. It is nice knowing that the compiler will finish compiling any code (one way or the other) in a reasonable time. If the code is millions of lines, then it might take a while, but I think that is understandable. The example in this thread is just a few lines, but it can take arbitrarily long to compile. At the same time, in practice, this pretty much never happens, and when it happens (such as in this thread), we know exactly what we are doing, so it doesn’t seem a problem that it will take a while to compile.

Why is this “bad news”?

In the sense that the solution using compile-time calculated constant expressions in Fortran takes so much more time.


Something went wrong, since some of the numbers reported are not prime.

999919 = 991 X 1009
999937 = 113 X 8849
999941 = 577 X 1733
999997 = 757 X 1321
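Such reports are easy to double-check with plain trial division; here is a minimal Python sketch (the helper name `smallest_factor` is mine, not from the thread):

```python
def smallest_factor(n):
    """Return the smallest factor > 1 of n by trial division (n itself if n is prime)."""
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return n

# The four composites reported above, with the factorisations given:
for n, p, q in [(999919, 991, 1009), (999937, 113, 8849),
                (999941, 577, 1733), (999997, 757, 1321)]:
    assert p * q == n and smallest_factor(n) < n  # factors check out; n is composite
```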


Yes, I realized last night that I was not updating rtN, I will try again with rtN=Int(Sqrt(N*1.0D0)).


nagfor revised times:
N=100,000 14 seconds
N=200,000 58 seconds
N=1,000,000 28 minutes 13 seconds

Primes are checked against FriCAS.


It is a quality-of-implementation issue, and each vendor will take a view on how important it is for them to devote scarce resources to this or that, and how difficult it is.


I think the current restrictions make it impossible to initialise a series like the Fibonacci numbers this way. The reason is that you need the results of a previous computation for the next one. I may be mistaken, but my experiments to achieve that have failed: the compilers I used rightly determined that I was doing something illegal.
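For comparison, the initialisation one would like to express at compile time is trivial in a language with mutable lists; a minimal Python sketch (the function name is mine):

```python
def fib_table(n):
    """Build the first n Fibonacci numbers iteratively: each new entry needs
    the two previous ones, which is exactly the self-reference that a Fortran
    named-constant array constructor cannot express while being built."""
    fib = [0, 1]
    for _ in range(max(0, n - 2)):
        fib.append(fib[-1] + fib[-2])
    return fib[:n]
```

For example, `fib_table(10)` gives `[0, 1, 1, 2, 3, 5, 8, 13, 21, 34]`.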

Here is a faster version of Arjen’s program for compile-time calculation of primes. This version is also faster (to compile and to run) than the improved version that I posted two days ago, and it is based on the following ideas (I describe the details for N = 1 million, sqrt(N) = 1000, int(sqrt(sqrt(N))) = 31):

  • Start with a seed set of primes, such as 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, all less than N^(1/4)

  • Using these seed primes as divisors, use Arjen’s PACK based algorithm to obtain primes between 31 and sqrt(N) = 1000.

  • Append these to the seed set to obtain a new and augmented set of primes < sqrt(N) = 1000.

  • Repeat the previous two steps to obtain a set of primes < N = 1 million.

Using NAG 7.0 on Windows (@themos, has the 7.1 version been released yet for Windows?), I observed a compilation time of 2.5 s, and a run time of 0.03 s (only printing is done at run time).

You may view the original algorithm as a special case of this new version with 2 as the sole member of the seed set, and a single, massive use of PACK to find all the remaining prime numbers < N.
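For readers who want to experiment with the staging idea outside a Fortran compiler, here is a rough Python transcription of the algorithm described above (function names are mine; Fortran's PACK step is modelled by a list comprehension):

```python
from math import isqrt

def pack_filter(candidates, primes):
    """PACK analogue: keep the candidates with no divisor in the given prime set."""
    return [c for c in candidates if all(c % p != 0 for p in primes)]

def staged_primes(n):
    """Seeded two-stage sieve: primes <= n**(1/4) filter up to sqrt(n),
    then the augmented set filters up to n."""
    rt_n, rtrt_n = isqrt(n), isqrt(isqrt(n))
    # Seed set: primes up to n**(1/4), found by naive trial division.
    seed = [p for p in range(2, rtrt_n + 1)
            if all(p % d != 0 for d in range(2, p))]
    stage1 = pack_filter(range(rtrt_n + 1, rt_n + 1), seed)  # primes in (n**(1/4), sqrt(n)]
    primes01 = seed + stage1                                 # all primes <= sqrt(n)
    stage2 = pack_filter(range(rt_n + 1, n), primes01)       # primes in (sqrt(n), n)
    return primes01 + stage2
```

For instance, `staged_primes(1000)` yields the 168 primes below 1000, ending in 997.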

Here is the revised program, but note that compilers may require special options or excessive CPU time and memory to compile and link this version.

program primesieve
    implicit none

    integer, parameter :: rtrtN = 31, rtN = 1000, N = 1000000
    integer            :: i, np
    integer, parameter :: candidates1(*) = [(i, i=rtrtN,rtN-1,2)]
    integer, parameter :: primes0(*)  = [2,3,5,7,11,13,17,19,23,29,31] ! primes <= rtrtN, used to seed calculation
    integer, parameter :: primes1(*) = &
       pack( candidates1, [(all(mod(candidates1(i),primes0) /= 0), &
          i = 1,size(candidates1))] )  ! newly identified primes < rtN
    integer, parameter :: candidates2(*) = [(i,i=rtN+1,N-1,2)]
    integer, parameter :: primes01(*) = [primes0, primes1]        ! primes <= rtN
    integer, parameter :: primes2(*) = pack(candidates2,[(all(mod(candidates2(i),primes01) /=0), &
                   i = 1,size(candidates2))])

    np = size(primes2)
    write(*,'(i5,A10,i4)') size(primes01),' primes < ',rtN
    write(*,'(i5,A10,i7)') size(primes01)+size(primes2),' primes < ',N
    print *
    print *,' Last 10 primes < ',N,' :'
    write(*,'(10I8)') primes2(np-9:np)

end program primesieve

If anyone wishes to modify this program, for instance to change N, they need to be careful not to introduce an even integer into the candidate arrays. Similarly, if a different seed set is used, make sure that no non-primes are present in that set!

I learned to pay attention to these points when changing the current program to work for N = 10000 in order to test Gfortran and Ifort. For N = 10000, the compilation times were 61 s for Gfortran 12 and 1 s for Ifort and Ifx.


I decided to check that hint and made a simple Python script reconstructing the original @mecej4 code, using lists. Indeed, the timing is pretty similar to what @themos found for nagfor. But then, I switched to Python sets and the results were amazingly fast.

     N        lists      sets      (times in seconds, or min:sec)
  100000       22.5      0.03
  200000     1:33.1      0.05
 1000000    42:21.1      0.32
10000000        ???      4.4
Python lists code

import sys
from math import isqrt
N = int(sys.argv[1]) if len(sys.argv) > 1 else 1000000
rtN = isqrt(N)                       # odd divisors run up to sqrt(N)
cand = [2] + list(range(3, N, 2))    # 2 plus the odd candidates
mult = [i*j for j in range(3, rtN+1, 2) for i in range(j, N//j+1, 2)]
primes = [c for c in cand if c not in mult]
Python sets code

import sys
from math import isqrt
N = int(sys.argv[1]) if len(sys.argv) > 1 else 1000000
rtN = isqrt(N)                       # odd divisors run up to sqrt(N)
cand = {2} | set(range(3, N, 2))     # 2 plus the odd candidates
mult = {i*j for j in range(3, rtN+1, 2) for i in range(j, N//j+1, 2)}
primes = cand - mult                 # set difference replaces the list scan

Shouldn’t we vote for intrinsic sets in Fortran? :slight_smile:


On my Linux machine, that takes 2 seconds whereas FriCAS’s

[i for i in 2..1000000 | prime? i];

takes 10 seconds.


If you want a fast pure Python implementation:

In [8]: from sympy import sieve

In [9]: %time b = list(sieve.primerange(1000000))
CPU times: user 178 ms, sys: 3.2 ms, total: 182 ms
Wall time: 198 ms

But if you want code similar to the FriCAS snippet, you can do:

In [18]: from sympy.ntheory.primetest import isprime

In [19]: %time c = [i for i in range(2,1000000) if isprime(i)]
CPU times: user 2.18 s, sys: 3.22 ms, total: 2.18 s
Wall time: 2.18 s