I wonder if `selected_real_kind` was addressing a problem from the 60s, 70s and early 80s that had already disappeared by the time it became available in the standard and in compilers.

Now, can it happen in the future that we will see more variety again? I think it can. We might get both half precision (`real16`?) and `bfloat16`.
What is nice about the `selected_real_kind` approach is that it is hardware independent: you select the floating-point properties you need, and your code should run on any (future) hardware. So the idea is good. But it is a pain to type (and remember!) the correct precision and range values for single and double precision.
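For reference, a minimal sketch of what that looks like in practice (the precision/range values below are the usual ones for IEEE single and double precision, and are exactly the numbers one has to remember):

```fortran
program kind_demo
   implicit none
   ! Kind parameters requested by properties (decimal precision, exponent range),
   ! not by storage size. These are the usual values for IEEE single and double.
   integer, parameter :: sp = selected_real_kind(p=6,  r=37)
   integer, parameter :: dp = selected_real_kind(p=15, r=307)
   real(sp) :: x
   real(dp) :: y
   x = 1.0_sp / 3.0_sp
   y = 1.0_dp / 3.0_dp
   print *, x, y
end program kind_demo
```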
So in practice, I think we need to do what `stdlib` is trying to address with its kinds module (stdlib_kinds.f90 – Fortran-lang/stdlib), which exposes the `sp`, `dp` and `qp` kinds; we could add further kinds for half precision and bfloat16 in the future if needed.
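With that module, client code reduces to something like the following sketch (it assumes only that `sp` and `dp` are among the names exported by `stdlib_kinds`, as they are today):

```fortran
program use_stdlib_kinds
   use stdlib_kinds, only: sp, dp
   implicit none
   real(sp) :: x
   real(dp) :: y
   x = 1.0_sp
   y = 1.0_dp
   print *, x, y
end program use_stdlib_kinds
```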
It seems the intrinsic module `iso_fortran_env` is perhaps addressing exactly the same issue, but there seems to be confusion about its purpose. One view is that it only selects the floating-point type by storage size in bits (such as 32 or 64), so it is (or will be in the future) unable to distinguish `real16` from `bfloat16`. Another view is that `iso_fortran_env` lists simple names for all commonly used floating-point formats, and it would simply add `bfloat16` if that format becomes common in hardware, as well as any other such format.
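Today that looks like the sketch below; only the size-based names `real32`, `real64` and `real128` exist in `iso_fortran_env`, and anything like a `bfloat16` constant is hypothetical:

```fortran
program iso_env_kinds
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none
   ! Kinds selected purely by storage size in bits.
   real(real32) :: x
   real(real64) :: y
   x = 1.0_real32
   y = 1.0_real64
   print *, x, y
end program iso_env_kinds
```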
The way it can work then is that if a code uses `real32` or (in the future) `bfloat16` from `iso_fortran_env`, it could ensure that exactly this type or a wider type will be used. Except that, as I understand it, the compiler sets the constant to a negative value (such as -1) if the exact kind is not supported, rather than falling back to a wider one.
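Here is a sketch of how one might check for that; `real128` is used because it is the kind most likely to be missing on a given compiler:

```fortran
program check_kind
   use, intrinsic :: iso_fortran_env, only: real128
   implicit none
   ! The constant is negative when no real kind of that size exists,
   ! so it cannot silently fall back to a wider type.
   if (real128 > 0) then
      print *, "real128 is supported, kind value =", real128
   else
      print *, "real128 is not supported (constant =", real128, ")"
   end if
end program check_kind
```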
So it seems the best of both worlds is to have the kinds module in stdlib, but instead of `iso_fortran_env` it would use `selected_real_kind` with the proper precision and range values. That would ensure that, say, `hp` (half precision) or `bp` (bfloat16) is either exactly that type or a larger one, thus ensuring Fortran code keeps running.
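A hypothetical version of such a module might look like the sketch below. The names `hp` and `bp` and the requested values are my assumptions (roughly 3 decimal digits and exponent range 4 for IEEE half, about 2 digits with a single-precision-like range for bfloat16); on hardware without those formats, `selected_real_kind` resolves them to the smallest kind that satisfies the request, typically single precision.

```fortran
! Hypothetical kinds module; hp and bp are not existing stdlib names.
module my_kinds
   implicit none
   ! IEEE half precision: ~3 decimal digits, exponent range ~1e4
   integer, parameter :: hp = selected_real_kind(p=3,  r=4)
   ! bfloat16: ~2 decimal digits, but a single-precision-like range
   integer, parameter :: bp = selected_real_kind(p=2,  r=37)
   integer, parameter :: sp = selected_real_kind(p=6,  r=37)
   integer, parameter :: dp = selected_real_kind(p=15, r=307)
end module my_kinds
```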
Here is another issue: say you write code that works with single precision (just as an example), uses `epsilon`, and you tune the iterations and everything to work. Then the compiler ends up using double precision (again just as an example), as permitted by `selected_real_kind`. Will `epsilon` suddenly drop from roughly 1e-7 to 1e-16? If so, that can screw up the algorithm; for example it can get a lot slower if you tuned it to just reach single-precision accuracy. What is the way to fix this problem?
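To make the concern concrete, here is a sketch showing that `epsilon` tracks whatever kind the parameter actually resolved to, so a tolerance written as `epsilon(x)` silently tightens if the kind widens. One possible way around it (an assumption on my part, not established practice) is to derive the tolerance from the precision that was requested rather than from the kind that was obtained:

```fortran
program eps_demo
   implicit none
   ! Request "at least single precision"; on some machine or compiler this
   ! could legitimately resolve to a wider kind.
   integer, parameter :: wp = selected_real_kind(p=6, r=37)
   ! Tolerance tied to the actual kind: shrinks if wp widens.
   real(wp), parameter :: tol_kind = epsilon(1.0_wp)
   ! Tolerance tied to the precision that was requested: stays put.
   real(wp), parameter :: tol_req  = 10.0_wp**(-6)
   print *, "epsilon of the actual kind:      ", tol_kind
   print *, "tolerance from requested digits: ", tol_req
end program eps_demo
```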