Just curious: what’s the rationale for never using the double-precision exponent (`d0`)? Is it just that, as the name says, `double precision` doubles the precision of the default real kind, so it is not necessarily a 64-bit floating-point number, but could be 128-bit if the default real kind is 64-bit?
Or are there other subtleties that would be worth knowing?