Why does squaring generate a different result?

I have the following code (and many similar examples):

e3 = 0.5048011_DP * ratioConvectiveRadiative * ratioConvectiveRadiative

Since ratioConvectiveRadiative is just multiplied by itself, I changed this to…

e3 = 0.5048011_DP * ratioConvectiveRadiative**2

Yet, this generates a different value?!?

Does anyone know why?
Should I be using a real for the power?

Hard to say without knowing the value of ratioConvectiveRadiative and the types used in the code.
One guess is that with the ** operator the square is computed first, whereas with two separate multiplications the order of evaluation may differ.
How “different” is the result?

Unfortunately, the variance grows (then contracts) as the model progresses.

I can state that the value is double precision (real(DP)) and assume that the way the calculation is carried out differs in some way.

As @msz59 suggested, 0.5048011_DP * (ratioConvectiveRadiative * ratioConvectiveRadiative) should give the same result as the formula with the square.

With floating-point calculations, multiplication and addition are not associative (in the general case):
(a \times b) \times c \ne a \times (b \times c)
And when a lot of computations are made, the difference can grow.
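
A minimal sketch in single precision (values chosen deliberately so that one grouping underflows in the intermediate product):

real :: a, b, c
a = 1.0e-30
b = 1.0e-30
c = 1.0e30
print *, (a*b)*c   ! a*b underflows to 0.0 in single precision, so this prints 0.0
print *, a*(b*c)   ! b*c is about 1.0, so this prints about 1.0e-30
end

Even without underflow or overflow, the two groupings can differ in the last bits because each intermediate product is rounded.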

The compilers I am familiar with will treat real*real and real**2 the same. However, you must also consider operator precedence (see Doctor Fortran in “Order! Order!” (stevelionel.com)). Since exponentiation has higher precedence than multiplication, it will always be done first, whereas with the expression that has two multiplications, the leftmost one may be done first and, depending on the magnitude of the values, this could very likely create small (or maybe not so small!) differences in the value.
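
A minimal sketch making the three groupings explicit (the value is arbitrary; the last two lines should agree, while the first may differ in the final bits, if at all):

integer, parameter :: DP = kind(1d0)
real(DP) :: x
x = 1.23456789_DP
print *, (0.5048011_DP * x) * x     ! leftmost multiply done first
print *, 0.5048011_DP * (x * x)     ! square computed first
print *, 0.5048011_DP * x**2        ! ** has higher precedence: square first
end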

An interesting question arose in my mind. Is there a preferred way of computing a series of multiplications?

In the case of a series of additions, it is well known that we should add the small values first. For example, \sum_{i=1}^n \frac{1}{i} should be computed from n down to 1 to improve the final precision.
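
For instance, a quick sketch in single precision (n chosen arbitrarily) comparing the two directions:

integer, parameter :: n = 10000000
integer :: i
real :: s_fwd, s_bwd
s_fwd = 0.0
do i = 1, n
   s_fwd = s_fwd + 1.0/real(i)   ! large terms first; small terms eventually get lost
end do
s_bwd = 0.0
do i = n, 1, -1
   s_bwd = s_bwd + 1.0/real(i)   ! small terms accumulate before the large ones
end do
print *, s_fwd, s_bwd            ! the backward sum is typically closer to the true value
end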

But with multiplications I cannot imagine any such rule, as multiplication algorithms generally involve a mix of simple multiplications (one digit times one digit) and additions (see for example Reverse-engineering the multiplication algorithm in the Intel 8086 processor for the ancient 8086), like we learned as kids.

Do you know any rule about multiplication?

Although I can see that the order of multiplications is important if there is a risk of overflow.

In single precision the first one is OK, the second one will overflow (assuming the compiler does not change the order of operations):

print *, 1E37*1E-37*1E37

print *, 1E37*1E37*1E-37

But apart from those limit cases?

Maybe one should sort the numbers from smallest to largest in absolute value and then multiply in pairs from the middle? So product(x(1:5)) can be computed as x(3)*x(2)*x(4)*x(1)*x(5)
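
A hypothetical sketch of that ordering (not an established algorithm, just the suggestion above made concrete; it assumes the array is already sorted by increasing absolute value):

program middle_out
   implicit none
   double precision :: x(5)
   x = [0.5d0, 0.9d0, 1.1d0, 2.0d0, 3.0d0]   ! already sorted by |x|
   print *, mid_product(x), product(x)
contains
   function mid_product(x) result(p)
      ! Multiply outward from the median, alternating smaller/larger factors,
      ! aiming to keep the running product near the middle of the exponent range.
      double precision, intent(in) :: x(:)
      double precision :: p
      integer :: m, k, n
      n = size(x)
      m = (n + 1) / 2
      p = x(m)
      do k = 1, max(m - 1, n - m)
         if (m - k >= 1) p = p * x(m - k)   ! next smaller factor
         if (m + k <= n) p = p * x(m + k)   ! next larger factor
      end do
   end function
end program

For n = 5 this reproduces the x(3)*x(2)*x(4)*x(1)*x(5) order suggested above.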

In statistics one often maximizes the likelihood function, which is a product of the likelihoods of the individual observations. Stats packages typically maximize the log likelihood in order to avoid overflow. In this case the numbers to be multiplied are non-negative. In general one could store signs and the logs of the absolute values of the numbers to be multiplied.
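
A small sketch of that sign-and-log idea (all names here are illustrative): represent product(x) as sgn * exp(lg).

program log_product
   implicit none
   double precision :: x(4), lg, sgn
   ! The naive running product of these values would overflow double precision.
   x = [1d200, -2d150, 3d-250, 4d-90]
   call signed_log_product(x, lg, sgn)
   print *, sgn, lg   ! product = sgn * exp(lg)
contains
   subroutine signed_log_product(x, lg, sgn)
      double precision, intent(in)  :: x(:)
      double precision, intent(out) :: lg    ! log of |product|
      double precision, intent(out) :: sgn   ! -1, 0, or +1
      integer :: i
      lg = 0d0
      sgn = 1d0
      do i = 1, size(x)
         if (x(i) == 0d0) then
            sgn = 0d0   ! product is exactly zero; lg is meaningless in this case
            return
         end if
         lg = lg + log(abs(x(i)))
         if (x(i) < 0d0) sgn = -sgn
      end do
   end subroutine
end program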

Thanks @Beliavsky for these thoughts. I will think about it.

Concerning likelihoods, are they not expressed between 0 and 1? (No overflow is possible in the product in that case; maybe underflow?)

That is correct. Some notes on MLE discuss the use of logs to avoid underflow when computing a product.

It would be useful if the OP could answer this first.

With the expression as shown in the original post, and the object being of a kind with a precision of at least 12 (i.e., double precision), it will be interesting to see the differences.

No, a real power will not solve any accuracy problems, and it might well make them worse.

You do not show the declarations of the variables, which might be important in this case. Here is an example of how that could matter. If DP corresponds to real64, and if ratioConvectiveRadiative is real32, then there could be some small numbers in the range of ~1e-19 that, when squared, do not underflow but instead result in subnormal values. These values lose precision in the mantissa compared to normal real32 values. In this case, the **2 expression might have a different value than the left-to-right evaluation of the first expression, which would be equivalent to

e3 = (0.5048011_DP * real(ratioConvectiveRadiative,DP)) * real(ratioConvectiveRadiative,DP)

That conversion to real64 will eliminate the real32 problem in this range of values because of its larger exponent range.
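
A minimal sketch of that effect (the value is arbitrary, picked so that its square, 1e-38, falls just below the real32 normal minimum of about 1.18e-38):

program subnormal_demo
   use, intrinsic :: iso_fortran_env, only : real32, real64
   implicit none
   real(real32) :: r
   r = 1.0e-19_real32
   print *, r*r                              ! subnormal real32: mantissa bits are lost
   print *, real(r,real64)*real(r,real64)    ! real64 square: full precision retained
end program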

As others have pointed out, there is usually no significant difference in results of multiplications in different orders. Each multiplication has a relative error of epsilon, so if you have N multiplications, then you can expect differences in the output as large as N*epsilon relative error and with a typical sqrt(N)*epsilon for large N. That is normal roundoff error. But exceptions to those typical results can occur when subnormals are generated, and one can in effect lose up to all the bits of accuracy.

Of course, when an actual underflow occurs, the result is set to zero, which means you lose not only the mantissa bits, but also the exponent information, and when an overflow occurs, then you also lose all information in both mantissa and exponent (you might get an INF or a NaN as a result). These cases are usually more easily recognized. The gradual underflow can be a little trickier to debug.

This comment by OP suggests it’s declared by the DOUBLE PRECISION statement in the Fortran code.

It isn’t just overflow you have to worry about regarding order of operations. If the operands are of significantly different magnitude, you can lose low-order bits doing the operations in a different order. Here’s a slide from my SC13 presentation Improving Numerical Reproducibility in C/C++/Fortran (supercomputing.org) that touches on this problem.
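
A classic illustration of that magnitude effect (not from the slide; it uses addition, where the loss is easiest to see, and the values are arbitrary):

real :: big, small
big = 1.0e20
small = 1.0
print *, big + small - big   ! left to right: big+small rounds back to big, prints 0.0
print *, big - big + small   ! prints 1.0
end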

@garynewport ,

As part of your PhD education, please consider putting some additional independent effort into numerical analysis as part of the computational aspects of your research, and please show some care and empathy toward the readers here and their time by sharing your analyses with them along with your inquiries.

In the particular case you raise in the original post, the code below provides a rather simple-minded and basic test case you can possibly use to initiate such an analysis. With one particular compiler, the program output using this naive case is

C:\temp>ifort /standard-semantics /free p.f
Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.8.0 Build 20221119_000000
Copyright (C) 1985-2022 Intel Corporation.  All rights reserved.

Microsoft (R) Incremental Linker Version 14.31.31105.0
Copyright (C) Microsoft Corporation.  All rights reserved.

-out:p.exe
-subsystem:console
p.obj

C:\temp>p.exe
ratioConvectiveRadiative     e3: Relative Diff.            e3: Multiply                  e3: Square
.2189980198138681E-57        .000000000000000              .2421032773421913E-115        .2421032773421913E-115
.2548044275764261E-01        .000000000000000              .3277436099640126E-03         .3277436099640126E-03
.1918996546890502E-02        .000000000000000              .1858954153476850E-05         .1858954153476850E-05
38.31498580882514            .000000000000000              741.0672666673336             741.0672666673336
1.038361721501835            .000000000000000              .5442740546651633             .5442740546651633
5.835343857074246            .000000000000000              17.18910236357420             17.18910236357420
.1599700715861577E-03        .000000000000000              .1291807408536215E-07         .1291807408536215E-07
1.000000000000000            .000000000000000              .5048011000000000             .5048011000000000
.6333989918148395            .000000000000000              .2025233152873325             .2025233152873325
.1602691328445320            .000000000000000              .1296641946190870E-01         .1296641946190870E-01
4977.569365018100            .000000000000000              12507051.39016093             12507051.39016093
1.317589988268590            .000000000000000              .8763566064510171             .8763566064510171
21040762.33595129            .000000000000000              223482352486495.2             223482352486495.2
.5530428804862865            .000000000000000              .1543966611231064             .1543966611231064
5.908473126946373            .000000000000000              17.62263400950475             17.62263400950475
.8475847340899456E-01        .000000000000000              .3626490504020506E-02         .3626490504020506E-02
1.000000000000000            .000000000000000              .5048011000000000             .5048011000000000
10836465506467.55            .000000000000000              .5927828063474341E+26         .5927828063474341E+26
1.782138665280060            .000000000000000              1.603257492230115             1.603257492230115
.8273397013478003E-08        .000000000000000              .3455318003640593E-16         .3455318003640593E-16
.2641173255211879E-05        .000000000000000              .3521389576986460E-11         .3521389576986460E-11
31130124.48386241            .000000000000000              489194997505328.2             489194997505328.2
105.1522137567817            .000000000000000              5581.579734340993             5581.579734340993
1.357402924031642            .000000000000000              .9301175808330086             .9301175808330086
.5721526701350716E-02        .000000000000000              .1652510207200166E-04         .1652510207200166E-04

There are questions readers would benefit from understanding in order to provide knowledgeable answers, which can then lead to meaningful and educational discourse with this thread for years to come; they include: a) what differences do you notice? b) what is the code, and what are the “control variables” such as the compiler information, options used, etc.? and c) for the computational tasks at hand, what is your desired outcome (e.g., it can’t be bit-for-bit reproducibility, right?), and whatever it is, why so?

Click to see code
   implicit none
   integer, parameter :: NVALS = 25
   integer :: i
   double precision :: ratioConvectiveRadiative(NVALS)
   double precision :: e3_1(NVALS)
   double precision :: e3_2(NVALS)
   character(len=*), parameter :: fmtd = "(g0,t30,g0,t60,g0,t90,g0)"
   call get_prng_vals( ratioConvectiveRadiative )
   call calc( ratioConvectiveRadiative, e3_1, e3_2 )
   print fmtd, "ratioConvectiveRadiative", "e3: Relative Diff.", "e3: Multiply", "e3: Square"
   do i = 1, NVALS
      print fmtd, ratioConvectiveRadiative(i), e3_1(i)/e3_2(i)-1D0, e3_1(i), e3_2(i)
   end do
contains
   elemental subroutine calc( ratioConvectiveRadiative, e3_1, e3_2 )
      double precision, intent(in)    :: ratioConvectiveRadiative
      double precision, intent(out)   :: e3_1, e3_2
      integer, parameter :: DP = kind(1D0)
      e3_1 = 0.5048011_DP * ratioConvectiveRadiative * ratioConvectiveRadiative
      e3_2 = 0.5048011_DP * ratioConvectiveRadiative**2
   end subroutine 
   subroutine get_prng_vals( vals )
   ! Generate pseudorandom values
      double precision, intent(out) :: vals(:)
      ! Local variables
      double precision :: x( size(vals), 3 )
      double precision :: vexp(size(vals))
      double precision :: vsign(size(vals))
      call random_number( x )
      vexp = nint( x(:,2)*10D0 )
      where ( x(:,3) > 0.5D0 )
         vsign = 1D0
      else where
         vsign = -1D0
      end where
      vals = x(:,1)**(vsign*vexp)
   end subroutine
end    

Try doing this if the compiler is f2003 or later:

print "(ES24.16E3)", &
    (0.5048011_DP * ratioConvectiveRadiative) * ratioConvectiveRadiative,&
     0.5048011_DP * (ratioConvectiveRadiative * ratioConvectiveRadiative),&
     0.5048011_DP * ratioConvectiveRadiative**2

The OP’s Fortran statement suggests that the program was derived from a physical problem and that the physical data are known to at most 7 significant digits. Do the 3 values agree to 7 significant digits?
I also recommend searching Google or DuckDuckGo for “Goldberg floating point” to learn about the benefits and drawbacks of floating-point arithmetic.

The following example shows some effect of progressive underflow when compiled with Gfortran, although it eventually fails abruptly.

real    :: a,b,aa,bb
integer :: i

  a = 0.1
  b = 1.e17
  do i = 1,6
    write (*,*) i
    aa = a/b
    bb = a*b
    write ( *,*) (aa*aa)*bb, aa,bb
    write ( *,*) aa*(aa*bb), aa,bb
    b = b*10.
  end do
  end

Results with Gfortran are:

           1
   1.00000013E-20   1.00000005E-18   1.00000003E+16
   1.00000013E-20   1.00000005E-18   1.00000003E+16
           2
   1.00000007E-21   1.00000003E-19   9.99999984E+16
   9.99999968E-22   1.00000003E-19   9.99999984E+16
           3
   9.99994604E-23   1.00000005E-20   9.99999984E+17
   1.00000009E-22   1.00000005E-20   9.99999984E+17
           4
   1.00052712E-23   9.99999968E-22   9.99999998E+18
   9.99999921E-24   9.99999968E-22   9.99999998E+18
           5
   9.80908895E-25   9.99999968E-23   1.00000002E+20    ! significant error
   9.99999921E-25   9.99999968E-23   1.00000002E+20
           6
   0.00000000       1.00000000E-23   1.00000002E+21    ! total error
   9.99999958E-26   1.00000000E-23   1.00000002E+21

Sorry, been a tad busy.

@RonShepard, I’m not sure I referenced accuracy; I hope not. It’s more about the variance and why there might be one (however, read on :slight_smile: ). Also, I defined DP as I believe you suggested, i.e. integer, parameter :: DP = kind(0d0), which should be double precision (if I understood your suggestion). I applied this across the whole model.

@FortranFan, might I suggest you need to get over the fact that I am doing my PhD; it seems to be causing you far too much angst. This is not something I need to solve, or spend considerable time on. The solution is simple: if the difference is significant, then I don’t square the values but leave them as-is. The difference isn’t significant, but I have no need for any difference; so, left as-is (except, note the forthcoming statement). I posted out of curiosity. If that upsets you, stop reading my posts. :slight_smile:

@msz59, sorry for the long delay in replying. Each model run takes between 40 and 60 minutes and, as just stated, this was not a problem preventing me from moving on. I needed to get other aspects of the model resolved, as well as other factors in my life at the moment. The problem with providing the difference is that a run generates some 30,000 lines of data, none of which is directly related to this calculation. I did plan on running the model and showing you the graphical output with the variance. However, I ran the model as-is, then went into the code and ran it with all the relevant parts squared, expecting the variance to reappear in the graph; it didn’t!

If this is all confusing, sorry. Basically, I am now no longer getting variance. In improving the model, I believe I have corrected a hidden problem elsewhere that perhaps showed up only when I squared the variables. I have a similar issue where using the power of a variable generates an unexpected outcome, but this time it causes the whole model to crash. I am about to test this out and, if it truly does crash only when I make the change to using a power, I will share more details.

Sorry if my curiosity appears to have wasted anyone’s time. I do appreciate the replies and have found most of them fascinating.

Yes, @sblionel, I could not see why real*real would be any different from real**2; hence my question. There is a distinct possibility that there is no difference and that an error elsewhere was just being brought out by the use of squaring. That doesn’t even make sense, so I continue on… :slight_smile:

In case I wasn’t clear earlier, I’d expect most compilers to perform real**2 by doing real*real inline. However, given two multiplication operators and three operands, the standard allows a compiler to do either multiply first, unless you have parentheses around one of them. (Some compilers will not honor the parentheses by default, however.) Due to the way FP arithmetic works, you can indeed get different results depending on which multiply is done first.

I’ll also note that most compilers will treat an exponent of 2.0 as if it were an integer 2 - this is a classic optimization.

My issue is only that I have a relatively stable model that produces data well within error margins. Though the variations I have been experiencing do not take me outside those margins of error, I have maintained a relationship between my model and the one from the original author, who was a lead practitioner at NASA JPL. Where I know my alterations are improvements (due to latest research, corrections to obvious errors in the original code, better matching to observational data, etc.), I am happy to see variance. However, where I am simply making the code easier to read, I am not keen on seeing any variance.

Do I need 7 significant digits? Nope. The real-world error margins would allow considerable variation in the model. Right now my model is dealing with an area that has little to no observational data (I am mapping later model data to known observations, to check that the later model results are correct, with other constraints in place to inform my earlier model). I use double precision because that is where the original model had set most of its variables, and doing so has not added too many overheads.

I hope that makes sense.