For double precision you need 17 significant decimal digits to print a value without losing any bits in the worst case; see "Why do I need 17 significant digits (and not 16) to represent a double?" on Stack Overflow.

The argument goes like this: the gap between 1 and the next representable double is the machine epsilon, 2^-52 ≈ 2.22e-16, so 1 + epsilon = 1.0000000000000002, which requires 17 significant digits to write down.

For single precision the same argument suggests 8 decimal digits, because the single-precision epsilon is 2^-23 ≈ 1.19e-7:

1 + 1.19e-07 = 1.0000001

Here is a short Fortran program to test this:

```
! add the single-precision machine epsilon, 2.0**(-23) ~ 1.19e-7, to 1
real :: x, y
x = 1
y = 1.19e-7
print *, x, y
x = x + y
print *, x
end
```

Which prints:

```
1.00000000 1.18999999E-07
1.00000012
```

So by this epsilon argument you need 8 significant decimal digits near 1.0. One caveat: the worst case over all single-precision values is actually 9 digits (`FLT_DECIMAL_DIG` in C). The epsilon argument happens to land exactly on the worst case for double precision (17), but it undercounts single precision by one; notice that the printed values above already carry 9 significant digits. If somebody finds a mistake in the above analysis, please let me know.