Computing at compile time

Wonderful, that probably means the compile time will be reduced to more reasonable durations. I very much appreciate the efforts of compiler writers, and I cannot even begin to fathom the complexity of compilers. I said it elsewhere: programming seems to be rather chaotic, in the sense that a slight variation in a computational problem may result in very different code (or behaviour). Who would have anticipated that compile-time computing would require so many resources?

(Oh, and I gladly accept the nomination :slight_smile:)

Complexity seems to have dropped to O(N**2), for nagfor at least.
N=100,000 in 8 seconds.
N=200,000 in 32 seconds.

3 Likes

removed (see below)

2 Likes

Among the arrays in the program, the size of multiples increases the fastest. Asymptotically, it grows as M = N.log(N). (This array is not as compact as it could be; it contains duplicates such as 81, which equals 9 X 9 as well as 3 X 27.)

A compiler could sort this array before scanning, stripping out duplicates, and then scan the sorted array using binary search for each candidate. The complexity of these operations may be estimated as M.log(M), or N [log(N)]^2.

1 Like

removed, see below

I have no plans whatsoever, and I do not think I am obliged to have any. If only potential developers are allowed to make comments, this whole forum had better be deleted. I am removing my posts now that you consider them so offensive.

1 Like

Is a compiler required to evaluate constant expressions at compile time? “Silly programs” aside, if I have, for example,
iAny([SWP_FRAMECHANGED,SWP_NOMOVE,SWP_NOSIZE,SWP_NOZORDER])
(where the SWP_s are parameters) as an argument in a function call, can I expect the compiler to pass a constant evaluated at compile time or is it at liberty to generate code that evaluates a value at run time? I had a quick look at the standard but couldn’t see anything that mentioned it.

The compiler is not required to do anything, except somehow produce the correct output. The code in the called subroutine might not be executed if the modified variables do not affect the output.

Surely a pipe dream of mine, brought on by looking at too many bugs reported on online forums concerning various derived-type features across Fortran processors!

But if the GCC/gfortran front end could miraculously/magically be “bootstrapped” in modern Fortran, preferably the 2018 standard revision as opposed to the current GCC C basis, chances are rather high that many would attempt “to be part of the solution”, and invitations might not even be necessary. Or if the front end were refactored to modern C++, say >=C++20, there might be takers.

If this growing community on Fortran Discourse etc., over more than 18 months, is not producing more and more volunteers for the GCC/gfortran FOSS group, it might indicate either a reluctance to work with GCC C in conjunction with the GCC workflow, and/or a need for more tutorials, help manuals, etc. to lighten the ostensibly daunting nature of the task.

1 Like

Too bad @msz59 that you removed your comment, it contained very useful timing information and other time complexity comments. Would you mind restoring it? I wanted to implement the missing features in LFortran and try it and compare it.

If you want, just remove the “snail compiler” comment. I personally don’t think it’s offensive, but @kargl did not like it, so I think it’s fine to remove.

In this forum anyone is allowed to make comments, developers or not.

Speaking from personal experience, when you spend a lot of time contributing to open source, it is easy to feel unappreciated: many people post negative things about a project you spent years contributing to, almost nobody says positive things, and only a few end up contributing and helping out. Developing a compiler such as GFortran is a monumental task, and even though I didn’t contribute to GFortran myself, I suspect its developers hear few “good comments” and mostly hear about things that do not work. I think most people who haven’t contributed to open source projects before, or not to compilers, don’t realize that.

Long story short: let’s assume the best intentions of others, and let’s be appreciative of each other’s work.

7 Likes

I hope kargl does not get offended by what I am about to say.
I wanted to say to kargl: you are no doubt a very capable Fortran guy. In fact, many people here, perhaps all of us, respect you very much. Without you this Discourse would not be complete.
I am not qualified to comment on you (your Fortran expertise is way above mine).
But perhaps you could try to think of it this way:
people are reporting bugs/issues (even if they did not fix them) simply because they are using gfortran and Fortran. In other words, it means you are doing a great job keeping people using gfortran and Fortran for so many years, right? Believe me, you have done a HUGE job :100: :+1: :slight_smile: So the bottom line is, you can be very proud of yourself every day, and you do not have to take anything regarding bugs/issues personally.
If someday no one discusses bugs anymore, perhaps that will really mean something is wrong.

5 Likes

I have been using gfortran since it first appeared back in 2003 or 2004, earlier g77 or, even earlier, f2c. I think gfortran is a great piece of software and I appreciate everybody involved in this project. The idea that I would want to take a jab at the gfortran community because it is relatively slow at compiling some geek code is so absurd that I wanted to cut this discussion short, introduced as it was by my unfortunate wording (actually borrowed from the opposition of e-mail vs. snail mail, which I have always considered harmless and funny).

I do not think it was of any considerable value; the code by @mecej4 is available, so everybody can easily repeat my test on their own machine. Still, here it is once again (done on an Intel E5-2650 v3, 256 GB RAM, Rocky Linux 8.5 machine):

          gfortran |  ifort 
N=1000       2.8   |   1.1 (so far so good)
N=2000      12.7   |  12.0
N=4000      58.7   | 133.0 (reaching 2.7g resident memory!)

BTW, to my great surprise (and as a warning!) I found that everybody has access to the full history of edits of any post, not only their own. That is really weird.

4 Likes

There is a recent thread, 79 languages…, regarding a benchmark of several languages at finding the prime numbers less than 1 million. For those who do not follow it, there is a leading group of three (Lisp, C++, D) which outperform the remaining competitors by several orders of magnitude (!).
A comment by @Jweber puts those strange results into the context of this topic by noting that the C++ winner uses constant expressions extensively, so there was not much left to do at run time. I wonder how the code by @Arjen and @mecej4 would do in that benchmark, but to check that one would have to use nagfor, which, according to what we already know, is the only compiler possibly able to handle N=10**6. Maybe @themos?
The bad news is that the C++ constexpr code (for N=10**6) compiles with GNU g++-11 in only 7 seconds.

3 Likes

Yes, that would be an interesting tech demo. I was not even aware that such possibilities exist in the Fortran language (although I assume it is not necessarily a very good idea to use them for large data sets in production code…).

Just two remarks concerning the competition.

a) Results where the compiler does the work are somewhat outside the competition (I will go into a bit more detail in the 79 languages thread).
b) The results on the web page are not reported values but are computed on a small set of systems via a script that in turn obtains its results from Docker images with the compilers/interpreters.
Because of this, free (as in beer and speech) compilers are a requirement, and I guess the NAG compiler cannot be integrated into the system.

But I do not really think that this is a problem. It is somewhat “cheating” anyhow.

2 Likes

The N=1,000,000 case is compiled by nagfor in 13 minutes, 19 seconds. Clearly, there is room for improvement.

999919 999931 999937 999941 999953 999959 999961 999979 999983 999997

2 Likes

@themos, what do you think the default behavior should be? Currently nagfor will effectively “hang” for very large N, correct? GFortran, meanwhile, refuses to compile N beyond 64K unless you pass a special compiler flag.

What would I like as a user? I don’t know. It is nice to know that the compiler will finish compiling any code (one way or another) in a reasonable time. If the code is millions of lines, then it might take a while, but I think that is understandable. The example in this thread is just a few lines, yet it can take arbitrarily long to compile. At the same time, in practice this pretty much never happens, and when it does (as in this thread), we know exactly what we are doing, so it doesn’t seem a problem that it takes a while to compile.

Why is this “bad news”?

In the sense that the Fortran solution using compile-time-evaluated constant expressions takes so much more time to compile.

1 Like

Something went wrong, since some of the numbers reported are not prime.

999919 = 991 X 1009
999937 = 113 X 8849
999941 = 577 X 1733
999997 = 757 X 1321

1 Like

Yes, I realized last night that I was not updating rtN; I will try again with rtN = Int(Sqrt(N*1.0D0)).

2 Likes