Complex arithmetic significantly slower? - not really

This probably explains the results I posted above.

Well, I never said GFortran is bad - if it were, I wouldn’t use it. I was just wondering why I got poor performance in the complex plane, and I got the answer. When I see such a difference in performance I might suspect something is wrong with complex-number arithmetic, and I clearly said this was just a GUESS - which turned out to be a wrong one. Does this justify a flat-out, blatantly offensive first reaction? You should “speak up” if you so wish, but you could/should also be more civilized about it. Nobody came here to bash GFortran - definitely not me, since I have been using it almost exclusively for the last few years. I can’t thank the people behind the whole GCC project enough for their work, but that doesn’t mean I should never suspect something is wrong, or even question the direction the whole project has taken lately (but that’s another topic, and a very controversial one.)

Thank you for those results, I was wondering what’s going on with other compilers as well.

This is a lesson I learned years ago, and that’s why I treat `**2` as taboo. But things are changing for the better, it seems.

This is just an example for my upcoming FGL; in fact it is just a “sub-example” showing the capabilities of the imaging “sub-library” I added recently. It was never intended to be a thorough study of fractals, although that is certainly an interesting topic.

In fact, when I saw the poor performance, the first thing I thought was that something was wrong with the imaging code (which maybe needed optimizations or was buggy - I hunted the “usual suspect”, the “big fish”, first.) So I wrote the equivalent code in C for comparison, focusing on how the imaging is done. But C doesn’t have built-in complex arithmetic (yes, I am aware of <complex.h>, but come on now). So my C implementation never had abs or sqrt, since all I care about is whether we escaped the small circle Mandelbrot is all about. When I saw the C code performing better I got angry. :laughing: Fun fact: when I was typing that abs in the Fortran complex-plane version of the code, I stopped for a moment thinking “maybe not abs?”. Then I thought “never mind, it makes the code clearer”. Little did I know this would cost me several hours of work hunting down a bug in the imaging code - a bug which was not there.
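For illustration, here is a minimal C sketch of the “no abs, no sqrt” escape test described above, on plain doubles. The function names are mine, not from the actual FGL or test code; the idea is just to compare the squared magnitude against 4 instead of computing the magnitude itself.

```c
/* Hypothetical sketch: since we only care whether we left the circle of
   radius 2, compare the squared magnitude against 2*2 = 4 and skip the
   square root entirely. */
static int escaped(double zr, double zi)
{
    return zr * zr + zi * zi > 4.0;   /* |z|^2 > 4  <=>  |z| > 2 */
}

/* Classic Mandelbrot inner loop on plain doubles: z = z*z + c. */
static int mandel_iters(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    int n;
    for (n = 0; n < max_iter && !escaped(zr, zi); n++) {
        double t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
    }
    return n;
}
```

c = 0 is in the set and never escapes, so `mandel_iters(0.0, 0.0, 100)` runs the full 100 iterations, while a point like c = 2 + 2i escapes after the first iteration.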

Thank you all for participating. This was productive. :slight_smile:
The discussion you had about the Mandelbrot set, and how to further optimize the offending line in the code, got me thinking of starting a whole fractal project. But I guess 0.1-0.5 seconds for a detailed fractal image in a “sub-example” of my library, on a 10-year-old laptop (which was never a good one to begin with), is… good enough for now. Otherwise that library would never be completed.

2 Likes

I would say that this has not been an issue in any Fortran compiler since the 1960s. The more general way to compute integer powers involves extracting the bits from the exponent and repeatedly squaring and multiplying to get the final result. Even that is relatively cheap, especially for constant integer exponents, where the compiler can see what the highest nonzero bit is.

The thing that is expensive is x**y where x and y are floating point. This is typically computed using log() and exp(). This is what the C language does with its pow() function, and it is why C programmers avoid it when possible and replace it with explicit multiplies. As I said, for integer exponents this has not been a problem in Fortran since the 1960s.
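The bit-extraction scheme mentioned above is exponentiation by squaring. A minimal C sketch (`ipow` is my own illustrative name, not a standard function):

```c
/* Exponentiation by squaring: walk the bits of the exponent from the
   lowest up, squaring the base at each step and multiplying it into
   the result whenever the current bit is set.  O(log n) multiplies
   instead of the n-1 multiplies of the naive loop. */
double ipow(double x, unsigned n)
{
    double result = 1.0;
    while (n > 0) {
        if (n & 1u)       /* lowest remaining bit set: fold base in */
            result *= x;
        x *= x;           /* square the base for the next bit */
        n >>= 1;
    }
    return result;
}
```

For a compile-time constant exponent like `**2`, a compiler can unroll this down to a single multiply, which is why x**2 ends up costing the same as x*x.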

1 Like

You are probably right - I am only using compilers, I never got involved in making one. But with an ancient Assembly and Pascal background (and Pascal didn’t even have a pow function back then) you get used to “power means log”, so it’s a “terrifying” thing you should avoid - even if it’s just foo squared. Nothing worse than an old habit…

I beg to strongly disagree. You just quoted me, but you conveniently skipped the last part. I will make it bold in the full quote:

I am really sorry I have to say this but this convenient half-quote doesn’t give me an option:
Where the “sweeping generalization” and the “attack on GFortran” are supposed to be is beyond me. But the flat-out uncivilized bad manners are right in front of me when I read your first post. And the next ones don’t get much better…

Well, your… abrasive response (to put it lightly; it was far more offensive than that) provoked a response you obviously did not expect - but you should have, unless you think you can do such things and others will just accept such behavior without answering.
The fact is I never said GFortran is bad, and I am thankful to the people behind it. That doesn’t mean I will accept every decision they take (especially lately) blindly, nor does it mean I will accept bad manners.
At any rate, think what you wish; it doesn’t really matter. My question here was answered, plus it triggered an interesting discussion about further optimizations, which I very much enjoyed. I prefer to leave it at that.

Sure… I did the test only for timing purposes with random data, not even the Mandelbrot algorithm. But in general yes, replacing the L2 norm by the L1 norm (or any other norm) has implications beyond the timing.

1 Like

This is the L∞ norm… But it’s not faster: you are trading an addition for a second test. It can be faster if the first test is triggered often enough.

I think those tests seem equivalent to Chebyshev distance (L_{\infty} metric):
for Mandelbrot we can use if (max(abs(z%re), abs(z%im)) > 2.0_wp)
and yes, there is no arithmetic => fast & no rounding.

Here 2.0 is sufficient as the threshold, since the “ball” of that metric is a square with sides parallel to the axes, and max(|re|, |im|) > 2 already implies |z| > 2.

The graphical difference comes from the fact that the color outside the Mandelbrot set is generally computed from the number of iterations of the innermost loop. So it depends on the metric used in that test.

Concerning abs(z), I would finally say avoid it:

  • if it has a significant impact on performance (here it accounts for ~half of the innermost loop),
  • and you don’t need the Euclidean distance in another computation (here the square of the distance is sufficient),
  • and you are sure you will have no overflow (here we stay just around a small circle) and no problems with underflow (here the origin is in the main cardioid of the set).

So in the general case use abs(z) unless you have a good reason to avoid it.
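To illustrate the overflow point: a hand-rolled squared magnitude blows up long before abs does, because C’s cabs() (and, typically, Fortran’s abs for complex) scales internally in the manner of hypot(). A small C sketch, where `mag_squared` is my own illustrative name:

```c
#include <complex.h>
#include <math.h>

/* Naive squared magnitude: x*x already overflows to infinity once the
   parts pass roughly 1e154, even though |z| itself is representable. */
double mag_squared(double complex z)
{
    double x = creal(z), y = cimag(z);
    return x * x + y * y;
}
```

For example, mag_squared(1e200 + 0.0*I) is infinity, while cabs(1e200 + 0.0*I) correctly returns 1e200. For the Mandelbrot test none of this matters, since everything stays near a circle of radius 2.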

1 Like