This is true, which is why I put the word “literal” in quotes. This is as close as I know how to come within fortran to simulating the “infinite precision” behavior of literals that is often proposed in these discussions. In this case, the i/o library knows the kind of the target variable, so it can keep converting the decimal digits to bits until the target precision is attained (in practice, the last bit is sometimes rounded incorrectly, but that is a general feature of floating point, not of fortran). But as I said previously, that does not work in other contexts, such as subprogram arguments and within expressions, so it cannot be done in a simple, consistent way elsewhere in the language. Each of those other situations would need its own set of conversion rules, and the programmer would need to learn that new level of complexity. Instead, the current fortran rule in all of these other (non i/o) cases is simple for the programmer: the value depends only on the specified kind, it does not depend on context, and it is always the same.
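To make this concrete, here is a minimal sketch (the program and variable names are my own) contrasting an internal read, where the i/o library converts to the kind of the list item, with an ordinary assignment of a default-kind literal:

```fortran
program literal_vs_read
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   character(len=8) :: s = '1.1'
   real(real64) :: from_read, from_literal

   ! The i/o library knows the kind of the list item, so the decimal
   ! string is converted with full real64 precision here.
   read (s, *) from_read

   ! The default-kind (usually real32) literal is evaluated first and
   ! then converted, so only single-precision accuracy survives.
   from_literal = 1.1

   print '(a, f20.17)', 'from read:    ', from_read
   print '(a, f20.17)', 'from literal: ', from_literal
end program literal_vs_read
```

On a compiler with a 32-bit default real, the first value agrees with 1.1 to full real64 precision while the second carries only about seven correct decimal digits.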
This rule has changed over the life of the language. Prior to f90, there was some ambiguity in the language standard about how, for example, literals in data statements were converted. This caused portability issues as different compilers adopted different conventions, and it resulted in inconsistencies even within some compilers; the issue was eventually resolved to the current convention. There are also some ambiguities in the language standard about how expressions are evaluated that are closely related to this issue, but still distinct. For example, the compiler is allowed to use 80-bit registers to evaluate floating point expressions, where the results can differ from evaluations performed with 32-bit or 64-bit precision. As another example, the compiler is allowed to use fused multiply-add instructions, where the results can differ from the combination of separate multiply and add instructions, or it can use simd vector instructions to evaluate blocks of operations in ways that differ from the sequence of scalar operations. These differences are small, arising usually from different rounding conventions in the instructions or from different treatments of denormalized values. In contrast, treating literals as if they had infinite precision, or as if they had different kind values, typically results in larger differences in the computed results.
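The size of that effect is easy to see. In this sketch (again with names of my own choosing) the two printed values differ in the eighth significant digit, far above the last-bit rounding differences produced by fma or extended registers:

```fortran
program literal_kinds
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none

   ! A literal's value depends only on its kind, never on context.
   ! Widening the real32 value afterward cannot recover the lost bits.
   print '(f20.17)', real(1.1_real32, real64)  ! 1.10000002384185791...
   print '(f20.17)', 1.1_real64                ! 1.10000000000000009...
end program literal_kinds
```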
I also should mention that the NAG compiler supports both the `a16` and the `a128` lines in the above code. In this case, the `a16` value differs from the other assigned values because it cannot represent exactly the `real32` value on the rhs of the statement. There are, of course, some decimal values that can be represented exactly by all floating point kinds, but `1.1` is not one of them.
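Since the code in question is not reproduced here, the following is my rough reconstruction of the relevant lines; the `real16` kind parameter is an assumption obtained through `selected_real_kind` (on compilers without a 16-bit kind it simply falls back to a wider one):

```fortran
program half_and_quad
   use, intrinsic :: iso_fortran_env, only: real32, real128
   implicit none
   ! IEEE half precision has about 3 decimal digits and decimal range 4;
   ! NAG provides such a kind, other compilers may return real32 here.
   integer, parameter :: real16 = selected_real_kind(p=3, r=4)
   real(real16)  :: a16
   real(real128) :: a128

   a16  = 1.1_real32   ! rounds: real16 has too few mantissa bits
   a128 = 1.1_real32   ! exact: real128 holds every real32 value
   print *, a16
   print *, a128
end program half_and_quad
```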