How many significant decimal digits for real single precision?

Based on IEEE 754, single precision should give us about 7 significant decimal digits. However, the intrinsic function precision(x) gives only 6.

real :: x
x = 1.0 / 300000.0
print *, precision(x) ! prints 6

What is the number of significant digits for a single precision number?


For double precision you need 17 significant decimal digits to print it without losing any bits in the worst case: see “Why do I need 17 significant digits (and not 16) to represent a double?” on Stack Overflow.

The argument goes like this: if the smallest double that changes 1.0 when added to it is epsilon ≈ 2.2e-16, then 1 + epsilon = 1.0000000000000002, which requires 17 digits to represent.
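That argument can be spot-checked directly in Python, whose floats are IEEE binary64 (a quick sketch, not Fortran):

```python
# Machine epsilon for binary64: the gap between 1.0 and the next larger double.
eps = 2**-52
x = 1 + eps
print(x)  # 1.0000000000000002 -- needs 17 significant digits

# 16 significant digits are not enough to round-trip this value...
assert float(f'{x:.15e}') != x  # .15e prints 16 significant digits
# ...but 17 are.
assert float(f'{x:.16e}') == x  # .16e prints 17 significant digits
```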

For single precision you probably need 8 decimal digits, because:

1 + 1.19e-07 = 1.0000001

Here is some code to test this:

real :: x, y
x = 1
y = 1.19e-7
print *, x, y
x = x + y
print *, x
end

Which prints:

   1.00000000       1.18999999E-07
   1.00000012    

So I would say you need 8 significant decimal digits to represent any single precision floating point number. If somebody finds a mistake in the above analysis, please let me know.


IIRC, you actually need 12 decimals to reconstruct a single-precision real down to the last bit. You only have five to six significant decimals, but that is a different matter. More details can be found in the classic article by David Goldberg, What every computer scientist should know about floating-point arithmetic (there it says 11 decimals are required, if I interpret it right 🙂)

@icpup, you may be considering the binary32 format interchangeably with the decimal32 format in the context of IEEE 754.

With binary32, you will notice the IEEE 754 model gives you 24 bits toward the precision of the floating-point representation.

And note binary32 aligns with the real model in the Fortran standard; working out the math toward decimal precision (which is what the Fortran standard intrinsic function PRECISION yields) you will get 6.
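For what it's worth, that arithmetic can be sketched in Python; the formula below assumes the Fortran model for PRECISION with a binary radix, i.e. INT((DIGITS-1)*LOG10(RADIX)):

```python
import math

# Decimal precision per the Fortran model, INT((DIGITS-1)*LOG10(RADIX)),
# for a binary (radix 2) floating-point format with the given mantissa bits.
def decimal_precision(digits):
    return int((digits - 1) * math.log10(2))

print(decimal_precision(24))  # binary32: int(23 * 0.30103...) = 6
print(decimal_precision(53))  # binary64: int(52 * 0.30103...) = 15
```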

I would add that when you compute something with a numerical algorithm, whatever the precision and number of significant digits, you generally don’t know how many digits of the result are exact. In most cases, only the final digits are wrong (but how many?). And in the worst cases (happily, often intentionally designed for catastrophic computation), zero digits are exact.

Rump’s formula is a terrible example:
https://www.researchgate.net/publication/225180314_Rump%27s_Example_Revisited
You can compute it with single precision reals, then with double precision, then with the x87 (387) instruction set, and you will get three different values, all wrong. From your results you can deduce neither the sign nor any digit, not even the power of ten…

(333 + 3/4) * b^6 + a^2 * (11 * a^2 * b^2 - b^6 - 121 * b^4 - 2) + (11/2) * b^8 + a / (2*b)
with a = 77617 and b = 33096
You can also try other factorizations of that formula… and have fun…
Oh yes, the mathematical value is ≈ -0.8273960599468213
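For the curious, here is a quick Python (binary64) evaluation of that expression; only the claim that the result lands far from the true value is asserted, since the garbage value itself depends on how the expression is evaluated:

```python
# Rump's example: catastrophic cancellation between terms of magnitude ~1e36
# leaves pure rounding error, so binary64 gives a wildly wrong answer.
a = 77617.0
b = 33096.0
f = 333.75*b**6 + a**2*(11*a**2*b**2 - b**6 - 121*b**4 - 2) + 5.5*b**8 + a/(2*b)
print(f)  # huge and wrong; the true value is about -0.8273960599468213
assert abs(f - (-0.8273960599468213)) > 1.0
```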


So 17 decimal digits agrees with my post above. The 9 digits for single precision is one more than my lowest estimate (8). I gave an example (1.0000001) that requires 8 digits.

What is an example of a single precision number that requires 9 digits?

Yes. So far I haven’t found an example. But I know that one can enumerate all single precision floating point numbers, so we can write a program to find it.

IMHO 6 decimal digits is the maximum number guaranteed to be significant. It is somewhat similar to the range: the 32-bit real min/max are 1.17549435E-38 and 3.40282347E+38, but the (fully guaranteed) range of the decimal exponent is 37.

From the Wikipedia article on the single-precision floating-point format, quoting literally Prof. W. Kahan’s Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic:

This gives from 6 to 9 significant decimal digits precision.
If a decimal string with at most 6 significant digits is converted to IEEE 754 single-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.
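Both of Kahan's rules can be spot-checked in Python, using struct to emulate binary32 (a random sampling, not a proof):

```python
import random
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

random.seed(0)

# Rule: a binary32 value printed with 9 significant digits reads back exactly.
for _ in range(10000):
    x = f32(random.uniform(-1e6, 1e6))
    assert f32(float(f'{x:.8e}')) == x  # .8e prints 9 significant digits

# Rule: a 6-significant-digit decimal survives decimal -> binary32 -> decimal.
for _ in range(10000):
    s = f'{random.uniform(-10.0, 10.0):.5e}'  # .5e prints 6 significant digits
    assert f'{f32(float(s)):.5e}' == s

print('both rules hold for all samples')
```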

Yes, I was imprecise. It seems 6 are absolutely necessary.

What I was after is how many digits you have to print (say to save to a file) so that when you read it, you don’t lose any accuracy. Kahan also says 9 for this use case.

So I think there must be a single precision number that requires 9 digits. We should find it.

Found it!

Here it is (thanks @kargl for the tip how to find it!):

1.00000048

That requires all 9 digits to print. If you only print 8 digits, and read it back, you get 1.00000036, which is a different number. Test code:

real :: x
x = 1.00000048
print *, x
x = 1.0000004
print *, x
end

This prints:

   1.00000048    
   1.00000036    

Conclusion: you need to print 9 digits for single precision and 17 digits for double precision to not lose any accuracy.


The final ‘8’ in 1.00000048 is not significant, I’m afraid. Change it to x=1.00000046 and you’ll still get 1.00000048 on output.
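Indeed, this is easy to confirm in Python, with struct emulating binary32:

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Both decimals round to the same binary32 value (1 + 4 ulps), so the
# trailing '8' of 1.00000048 carries no information.
assert f32(1.00000048) == f32(1.00000046)
print(f'{f32(1.00000046):.8e}')  # 1.00000048e+00
```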

Still confused. For binary32 you have at most 24 bits to represent the real fraction in memory, which is equivalent to 24 * log10(2) ≈ 7.225, i.e. 7 significant decimal digits. So here come the questions:

  1. Why does the Fortran intrinsic precision function give 23 * log10(2) ≈ 6.924, i.e. 6 significant digits?
  2. Where does the 8th or the 9th significant digit come from?

The number of decimal digits for single precision is 6 to 9. This means that a single precision number can always represent 6 decimal digits, and that when you print one, you have to print up to 9 to not lose accuracy.

For double precision it is 15 to 17. So a double can always represent 15 digits, and when you print one, you have to print up to 17.

It comes from the fact that 6 decimal digits are sometimes not enough to identify the number uniquely. Not always, but sometimes you need 7, 8, or even 9. Never more than 9.

Computer math would be easier if we had 8 or 16 fingers. Since 10 does not happen to be a power of 2, the same number of binary digits may require a different number of decimal digits to represent the same value. Cf.
1000(2) = 8(10) and 1111(2) = 15(10);
1 0000 0000 0000 0000(2) = 65536(10) and 1 1111 1111 1111 1111(2) = 131071(10)

To prove the “6” we would need to find a 7-decimal-digit number which, converted to a binary real value and back to a string of 7 decimal digits, yields a different 7th digit. This may be challenging 🙂
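It is less challenging if you know where to look. Just above 2^33 ≈ 8.59e9, consecutive binary32 values are 1024 apart while consecutive 7-significant-digit decimals are only 1000 apart, so a collision must occur there; a Python sketch (struct emulating binary32) finds one quickly:

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Scan 7-significant-digit decimals just above 2**33, where binary32 spacing
# (1024) exceeds the decimal spacing (1000), for a failed round-trip.
for m in range(8589935, 10000000):
    d = m * 1000.0  # exact in binary64: an integer well below 2**53
    if float(f'{f32(d):.6e}') != d:  # .6e prints 7 significant digits
        print(f'{d:.6e} -> binary32 -> {f32(d):.6e}')
        break
```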


An additional factor contributing to the “6-9” problem may be the exponent part. The binary exponent does not translate directly to the decimal exponent, so e.g. 1.23456 and 1.23456e33 will have completely different binary mantissas, and vice versa.

Here’s what we use in stdlib_io for preserving values in savetxt-loadtxt round-trips:

The discussion here seems consistent with it.


No, the exponent is separate. As @milancurcic posted, the full format string for a double precision number is es24.16e3, which should always fully represent it.

How do you plan to use the precision information?

As you state, the intrinsic function returns int( (digits(x)-1)*log10(radix(x)) ) as the decimal precision, which is a functionally reasonable estimate of the number of significant decimal digits in the floating-point representation. A trivial illustration, given the near-ubiquitous use of IEEE floating-point arithmetic by processors, being

   print *, "PI (binary32) = ", real( 4*atan(1D0) )
end

C:\Temp>a.exe
PI (binary32) = 3.141593

However, for a lossless “round-trip” when it comes to precision, say during IO, adding 2 to the result returned by precision to obtain the d value for digits in data edit descriptors is practical, though here you need at least 7:

   character(len=20) :: s
   character(len=*), parameter :: fmts = "(es20.7)" !<-- Try with 6 for d
   character(len=*), parameter :: fmtb = "(g0,b0)"
   real :: x
   write( s, fmt=fmts ) real( 4*atan(1D0) )
   read( s, fmt=fmts ) x
   print fmtb, "x(binary) = ", x
   print fmtb, "Expected is ", real( 4*atan(1D0) )
end

C:\Temp>a.exe
x(binary) = 1000000010010010000111111011011
Expected is 1000000010010010000111111011011

I meant something different: the same pattern of binary digits translates, with a different binary exponent, into a different set of decimals (not counting the decimal exponent), and thus possibly a different number of significant digits is needed to represent the value without loss of precision.


I know this is an old topic, but I think it is important to finally answer the question. I was looking for the answer too.

1.00000048… (mantissa=4, exponent=127) is not a correct example:
printing it with 8 significant digits (7 after the decimal separator) gives 1.0000005, which reads back as exactly the same 1.00000048… (mantissa=4, exponent=127).
The issue is not rounding when reading from or printing to a string; the issue is getting a different result after a round-trip (binary -> string -> binary).

Through randomly generated floats in some C code, printing them with %.7e and checking whether I get the same number back, I found a few examples.
One example is: 1.00000105e+01
1.00000105e+01
rounds (up) to:
1.0000011e+01
which reads back as:
1.00000114e+01
Rounding it down instead to:
1.000001e+01
reads back as:
1.00000095e+01
So no matter how you round it, the value read back is always too high or too low. This is therefore an example of a single precision float that needs 9 significant decimal digits to survive a round-trip.
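That example is easy to verify in Python as well (struct emulating binary32; .7e and .8e print 8 and 9 significant digits respectively):

```python
import struct

def f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = f32(1.00000105e+01)  # the binary32 value in question (10 + 11 ulps)

# 8 significant digits lose the value...
assert f32(float(f'{x:.7e}')) != x
# ...while 9 significant digits round-trip exactly.
assert f32(float(f'{x:.8e}')) == x
print(f'{x:.8e}')  # 1.00000105e+01
```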

You can play with it here: IEEE-754 Floating Point Converter

For double precision I found the following:
1.0000000000000002
This one is more obvious.
Rounding down to:
1
reads back as:
1
and rounding up to:
1.000000000000001
reads back as:
1.0000000000000011
So this is an example of a double precision float that needs 17 significant decimal digits to survive a round-trip.
