Can floating-point literals be adapted by the compiler to a double precision variable?

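As a minimal sketch of the behavior the question asks about (the program and variable names here are only illustrative, and `real64` is assumed to map to double precision on the target compiler): a literal such as `1.1` is a default-real constant, so it is rounded to default-real precision first and only then converted on assignment, rather than being adapted to the variable's kind.

```fortran
program literal_precision
   use, intrinsic :: iso_fortran_env, only: real64
   implicit none
   real(real64) :: a, b

   ! 1.1 is a default-real literal: it is rounded to default-real
   ! precision before being converted to real64 on assignment.
   a = 1.1

   ! 1.1_real64 is evaluated directly as a real64 constant.
   b = 1.1_real64

   print *, 'a          = ', a
   print *, 'b          = ', b
   print *, 'difference = ', b - a   ! nonzero: the literal was not adapted
end program literal_precision
```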
The crux of the issue is this: when it comes to the Fortran standard and its standard bearers, denial is a river in Egypt (!), perhaps a slow-flowing one. Thus things evolve too little and too late for practitioners to be able to express their computations better and more safely in Fortran code. The state of affairs is shameful and indefensible: Fortran practitioners are simply not being served adequately. Pointing out issues in C++ or Python is beside the point.