# Why does squaring generate a different result?

Oh no, I get that. It was specific to the real*real and real**2 (and other powers).

I have found several times that ‘simplifying’ the maths produces some ‘interesting’ results.

Isn’t that required by the standard? If I remember correctly, real**integer used to be treated as an exception to the general rule of converting the integer to real in any other real (op) integer expression, precisely because it should be done as repeated multiplication (plus one reciprocal in the case of a negative power), instead of taking a logarithm, multiplying, and exponentiating.

No. All the standard says is that the interpretation of x1**x2 is “Raise x1 to the power x2” and, “When a value of type real or complex is raised to an integer power, the integer operand need not be converted.”

It would be standard-conforming to convert the integer power to a real and then use logarithms to compute the result. I don’t know of any compiler that does this.

I think what @sblionel meant is that the optimization performed is to inline the multiplication, as if it had been written as `x*x` in the Fortran code in the first place, rather than calling a general procedure that computes the value. Some compilers do this inline optimization, i.e., avoid the procedure-call overhead, for integer powers up to 8 or so. For example, `x**8` might be computed as `t=x*x; t=t*t; t=t*t;` with only three multiplications and no procedure call.

That general procedure most likely contains two separate algorithms. For small integer powers, the result is computed by scanning the bits of the integer exponent: when a bit is 1, two multiplications are done (one for the repeated squaring of the intermediate, the other to accumulate the result), and when the bit is 0, only one multiplication is done (for the squaring). That is what the inline `x**8` code above does. When the integer power is large enough (i.e., its binary representation has few leading zeros), it is cheaper to compute the result as `exp(k*log(x))`, scaling `x` to be near 1 (so the `log(x)` expansion converges quickly) and scaling `k` accordingly.

There is also a possibility, however slight, that not squaring made a real and significant error go into hiding. In other words, the changes made to the code could be the equivalent of converting “bad news” into “no news” by jailing the messenger.


Oh, but I did mention it!

I think what @kargl meant is that when the `**` expression is inlined, it is inlined with parentheses in order to maintain the same evaluation order. The original expression with multiplication was written without parentheses, so the original expression and the inlined version might not agree: as you pointed out, the compiler is free to associate the operands in either order in one case, but must follow a specific order in the other.

There is a type of expression that some compilers allow as an extension to the standard, and compile without warning unless asked to diagnose it, that may have been used in the simulation code that @garynewport is using. I have not come across an explanation of why this extension exists, nor have I read an explanation of when it is useful. Here is an example:

```
program ffcub
implicit none
real x,y,z
x = 12.0
y =  5.0
z = x**3+-y**3
print *,x,y,z
end
```

Could you say which compilers?

ifort, for one, will reassociate across parentheses unless you add `-assume protect-parens`. The Fortran team argued with the back-end team for a long time about this. I don’t know whether the LLVM-based ifx is any different.