How many significant decimal digits for real single precision?

Based on IEEE 754, single precision should give us about 7 significant decimal digits. However, the intrinsic function precision(x) gives only 6.

real :: x
x = 1.0 / 300000.0
print *, precision(x) ! prints 6

What is the number of significant digits for a single precision number?


For double precision you need 17 significant decimal digits to print it without losing any bits in the worst case; see the Stack Overflow question “Why do I need 17 significant digits (and not 16) to represent a double?”

The argument goes like this: If the smallest double that can be added to 1 is epsilon ~ 2e-16, then 1+epsilon = 1.0000000000000002, which requires 17 digits to represent.
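This argument can be checked directly. A minimal sketch, assuming an IEEE binary64 double (so epsilon is 2**(-52)) and ES edit descriptors to control the digit count:

```fortran
program eps17
   implicit none
   double precision :: x
   ! epsilon(1.d0) = 2**(-52) for IEEE binary64
   x = 1.d0 + epsilon(1.d0)
   print '(ES24.16)', x   ! 17 significant digits: 1.0000000000000002E+00
   print '(ES23.15)', x   ! 16 significant digits: 1.000000000000000E+00
end program eps17
```

With only 16 digits the printed value is indistinguishable from 1.0, so reading it back would lose the last bit.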

For single precision you probably need 8 decimal digits, because:

1 + 1.19e-07 = 1.0000001

Here is some code to test this:

real :: x, y
x = 1
y = 1.19e-7
print *, x, y
x = x + y
print *, x

Which prints:

   1.00000000       1.18999999E-07

So I would say you need 8 significant decimal digits to represent any single precision floating point number. If somebody finds a mistake in the above analysis, please let me know.


The description of precision tells you exactly what it returns:

INT((p - 1) * LOG10(b)) + k

where the intrinsic applies to the model for real, and k = 1 if b is an
integer power of 10, 0 otherwise. For single precision on most systems,
one has p = 24, b = 2, and k = 0. The expression yields 6.
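The pieces of that formula are all available as intrinsics (DIGITS for p, RADIX for b), so the computation can be reproduced directly; a small sketch:

```fortran
program prec_model
   implicit none
   real :: x
   integer :: p
   ! INT((p - 1) * LOG10(b)) + k with p = DIGITS(x) = 24, b = RADIX(x) = 2, k = 0
   p = int((digits(x) - 1) * log10(real(radix(x))))
   print *, p             ! 6
   print *, precision(x)  ! 6
end program prec_model
```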


IIRC, you actually need 12 decimals to reconstruct a single-precision real down to the last bit. You only have five to six significant decimals, though, but that is a different matter. More details can be found in the classic article by David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic” (there it says 11 decimals are required, if I interpret it right :-))

@icpup, you may be considering the binary32 format interchangeably with decimal32 in the context of IEEE 754.

With binary32, you will notice the IEEE 754 model gives you 24 bits of precision in the floating-point representation.

And note that binary32 aligns with the real model in the Fortran standard; working out the math toward decimal precision - which is what the Fortran standard intrinsic function PRECISION yields - you will get 6.

Arjen, not quite correct. IEEE 754-2008 has

 For the purposes of discussing the limits on correctly rounded conversion, define
 the following quantities:

  for binary16, Pmin (binary16) = 5
  for binary32, Pmin (binary32) = 9
  for binary64, Pmin (binary64) = 17
  for binary128, Pmin (binary128) = 36
  for all other binary formats bf, Pmin (bf ) = 1 + ceiling( p × log10(2)), where
  p is the number of significant bits in bf

I would add that when you compute something with a numerical algorithm, whatever the precision and significant digits, you generally don’t know how many digits are exact in the result. In most cases, only the final digits are wrong (but how many?). And in the worst cases (happily, often intentionally designed for catastrophic computation), zero digits are exact.

Rump’s formula is a terrible example:
You can compute it with single precision reals, with double precision, then with the x87 (80387) instruction set, and you will get three different values, all wrong. From your results you can deduce neither the sign nor any digit, not even the power of ten…

(333 + 3/4) * b^6 + a^2 * (11 * a^2 * b^2 - b^6 - 121 * b^4 - 2) + 11/2 * b^8 + a / (2*b)
with a = 77617 and b = 33096
You can also try other factorizations of that formula… and have fun…
Oh yes, the mathematical value is ≈ -0.8273960599468213
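Here is a sketch of that computation in Fortran, writing 333 + 3/4 as 333.75 and 11/2 as 5.5. The actual values printed will vary with compiler, flags, and hardware, which is the whole point, so no expected output is shown:

```fortran
program rump
   implicit none
   real :: s, a4, b4
   double precision :: d, a8, b8
   a4 = 77617.0;  b4 = 33096.0
   a8 = 77617.d0; b8 = 33096.d0
   ! the catastrophic cancellation: 5.5*b**8 and a**2*(...) are both ~1e36
   ! and nearly cancel, so no digit of the result survives
   s = 333.75*b4**6 + a4**2*(11.0*a4**2*b4**2 - b4**6 - 121.0*b4**4 - 2.0) &
       + 5.5*b4**8 + a4/(2.0*b4)
   d = 333.75d0*b8**6 + a8**2*(11.0d0*a8**2*b8**2 - b8**6 - 121.0d0*b8**4 - 2.0d0) &
       + 5.5d0*b8**8 + a8/(2.0d0*b8)
   print *, 'single precision :', s
   print *, 'double precision :', d
   print *, 'true value       : about -0.8273960599468213'
end program rump
```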


So 17 decimal digits agree with my post above. The 9 digits for single precision is one more digit than my lower estimate (8). I gave an example (1.0000001) that requires 8 digits.

What is an example of a single precision number that requires 9 digits?

IEEE-754 2008 does not give examples that show the required number of digits.
I simply take IEEE-754 to be authoritative (at least the individuals who wrote it
know much more than I do about floating point). I’ve never looked for an example,
but if I did, I would consider a real number that is exactly half-way between two
floating point values.

I should also note the cited digits are the number of (ascii) characters required in
a round-trip conversion from binary32 to a string and back to binary32 to recover the
original binary32 value.


Yes. So far I haven’t found an example. But I know that one can enumerate all single precision floating point numbers, so we can write a program to find it.

IMHO 6 decimal digits is the maximum number guaranteed to be significant. Somewhat similar to range: the 32-bit real min/max are 1.17549435E-38 and 3.40282347E+38, but the (fully guaranteed) range of decimal exponents is 37.
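This can be seen with the intrinsics; the standard defines RANGE for reals as INT(MIN(LOG10(HUGE(x)), -LOG10(TINY(x)))), so it reports only the fully covered decades. A sketch (the printed forms of tiny/huge are how gfortran happens to render them):

```fortran
program decimal_range
   implicit none
   real :: x
   print *, tiny(x), huge(x)   ! 1.17549435E-38   3.40282347E+38
   print *, range(x)           ! 37
end program decimal_range
```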

From the Wikipedia article Single-precision floating-point format, quoting literally Prof. W. Kahan’s paper Lecture Notes on the Status of IEEE Standard 754 for Binary Floating-Point Arithmetic:

This gives from 6 to 9 significant decimal digits precision.
If a decimal string with at most 6 significant digits is converted to IEEE 754 single-precision representation, and then converted back to a decimal string with the same number of digits, the final result should match the original string.
If an IEEE 754 single-precision number is converted to a decimal string with at least 9 significant digits, and then converted back to single-precision representation, the final result must match the original number.

Yes, I was imprecise. It seems 6 are absolutely necessary.

What I was after is how many digits you have to print (say to save to a file) so that when you read it, you don’t lose any accuracy. Kahan also says 9 for this use case.

So I think there must be a single precision number that requires 9 digits. We should find it.

Found it!

Here it is (thanks @kargl for the tip how to find it!): 1.00000048

That requires all 9 digits to print. If you only print 8 digits and read it back, you get 1.00000036, which is a different number. Test code:

real :: x
x = 1.00000048
print *, x
x = 1.0000004
print *, x

This prints:

   1.00000048
   1.00000036
Conclusion: you need to print 9 digits for single precision and 17 digits for double precision to not lose any accuracy.


The final ‘8’ in 1.00000048 is not significant, I’m afraid. Change it to x=1.00000046 and you’ll still get 1.00000048 on output.

Here is a slightly more involved example, which in principle can find all such numbers:

program m

   implicit none

   real x, y
   integer n0, n1
   character(len=20) s

   x = huge(x)
   do
      n0 = transfer(x,n0)
      write(s,'(ES15.7)') x
      read(s,*) y
      n1 = transfer(y,n1)
      if (n0 /= n1) then
         write(*,*) n0, n1, x, y
      end if
      x = nearest(x,-1.)
      if (x < tiny(x)) exit
   end do

end program m
% gfcx -o z a.f90 -O && ./z
2097151999  2097151998   1.06338233E+37   1.06338227E+37

Still confused. For binary32 you have at most 24 bits to represent the real fraction in memory, which is equivalent to 24 * log10(2) ≈ 7.225 decimal digits, or 7 significant digits. So here come the questions:

  1. Why does the Fortran intrinsic precision function give 23 * log10(2) ≈ 6.924, or 6 significant digits?
  2. Where do the “8” or the “9” significant digits come from?

The number of significant decimal digits for single precision is 6 to 9. What this means is that a single precision number can always represent at least 6 decimal digits, and that when you print it, you have to print at least 9 digits to not lose accuracy.

For double precision it is 15 to 17. So a double can always represent at least 15 digits, and when you print it, you have to print 17.

It comes from the fact that 6 decimal digits are sometimes not enough to represent the number exactly. Not always, but sometimes you need 7, 8, or even 9. Never more than 9, though.

Computer math would be easier if we had 8 or 16 fingers. 10 does not happen to be a power of 2, so the same number of binary digits may require a different number of decimal digits to represent the same value. Cf.
1000(2) = 8(10) and 1111(2) = 15(10);
1 0000 0000 0000 0000(2) = 65536(10) and 1 1111 1111 1111 1111(2) = 131071(10)

To prove the “6” we would need to find a 7-decimal-digit number which, converted to a binary real value and back to a string of 7 decimal digits, yields a different 7th digit. This may be challenging :-)
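A targeted scan can find such failures; the range and step below are my own choices, not from the thread. In [2**33, 1e10) the binary32 spacing is 2**10 = 1024, while 7-significant-digit decimals there are spaced 1000 apart, so by pigeonhole two adjacent decimals must occasionally convert to the same real - and then at least one of them cannot round-trip through 7 digits. A sketch:

```fortran
program seven_digit_collision
   implicit none
   integer, parameter :: i8 = selected_int_kind(18)
   integer(i8) :: m
   real :: x1, x2
   character(len=15) :: s1, s2
   ! adjacent 7-significant-digit decimals m*1000 and (m+1)*1000,
   ! scanned over [2**33, 1e10) where the binary32 spacing is 1024
   do m = 8589935_i8, 9999998_i8
      write(s1,'(I0)') m * 1000_i8
      write(s2,'(I0)') (m + 1_i8) * 1000_i8
      read(s1,*) x1
      read(s2,*) x2
      if (x1 == x2) then
         print *, 'distinct decimals ', trim(s1), ' and ', trim(s2), &
                  ' read back as the same binary32 value'
         stop
      end if
   end do
end program seven_digit_collision
```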


An additional factor contributing to the ‘6-9’ problem may be the exponent part. A binary exponent does not translate directly to a decimal exponent, so e.g. 1.23456 and 1.23456e33 will have completely different binary mantissas. And vice versa.
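One can peek at the stored fraction bits with TRANSFER to see this; a sketch assuming 32-bit default real and integer with the IEEE binary32 layout (fraction in the low 23 bits):

```fortran
program mantissas
   implicit none
   real :: x1, x2
   integer :: i1, i2
   x1 = 1.23456
   x2 = 1.23456e33
   i1 = transfer(x1, i1)
   i2 = transfer(x2, i2)
   ! bits 0..22 hold the stored fraction; sign and exponent are masked off
   print '(B23.23)', ibits(i1, 0, 23)
   print '(B23.23)', ibits(i2, 0, 23)
end program mantissas
```

The two bit patterns differ completely, even though the decimal mantissas look identical.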

Here’s what we use in stdlib_io for preserving values in savetxt-loadtxt round-trips:

The discussion here seems consistent with it.