Can floating-point literals be adapted by the compiler to a double precision variable?

If they needed that precision, then their code would have been in error with various f77 compilers too; they would not have needed to wait for f90 to become available. The f77 standard allowed a double precision variable to be initialized in a DATA statement with either a single precision value or a double precision value. If they required the double precision value, then the D exponent was required. As I said previously, I always avoided doing this because it was not portable or reliable. I don't even remember exactly which compilers I used that did the silent promotion; it was simply a feature that I ignored completely.

On the other hand, I often did something like the following:

real*8     pi
parameter (pi=3.141592653589793238D0)
! or
data pi /3.141592653589793238D0/

This worked correctly when real*8 mapped to either single precision (e.g. on a Cray) or double precision (e.g. on a VAX). The type conversion, if necessary, was done by the compiler at compile time, not at run time.

With modern fortran, one would not do this, of course. The correct way now would be to use the kind parameter both in the declaration and in the literal constant, so they are always consistent.

real(wp), save :: pi = 3.141592653589793238_wp
! or
real(wp), parameter :: pi = 3.141592653589793238_wp

Aside from some exceptional cases, I would not use a data statement with modern fortran code, only with legacy code.
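For readers following the thread, here is a minimal, self-contained sketch of this approach, assuming wp ("working precision") is defined once in a module of one's own; the module and its name are illustrative, not anything defined by the standard:

module precision_mod
   implicit none
   ! "working precision": editing this one line switches every real(wp)
   ! entity and every _wp literal between single and double precision
   integer, parameter :: wp = kind(1.0d0)   ! or selected_real_kind(p=15)
end module precision_mod

program pi_demo
   use precision_mod, only: wp
   implicit none
   real(wp), parameter :: pi = 3.141592653589793238_wp
   print *, pi
end program pi_demo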

It only appears on the surface to be better than what preceded it, but that's not saying much.

“The current modern fortran type/kind approach” is half-baked.

And every round of incremental offerings that is supposed to "help" the practitioners leaves the half-bakedness unaddressed. An immediate example is the ISO_FORTRAN_ENV intrinsic module introduced in Fortran 2008 and the ill-defined named constants therein (REAL32, REAL64, etc.) that now pollute codes.

So much so that it is very difficult to write a good, general-purpose numerical library for a floating-point task - a supposed area of strength of Fortran - in a manner that is any better, in the type/kind aspect, than the REAL and DOUBLE PRECISION support included "officially" all the way back in FORTRAN 66.

This is an absolute shame.

F77 only supported two floating point types, REAL and DOUBLE PRECISION. If you had a program written with one and wanted to change it to the other, you had to rely on edit scripts or other external tools to do the thousands of substitutions.

With the modern fortran type/kind system, if the code is written appropriately, the programmer only needs to change a single kind value in a single line of code, and that value can then propagate throughout the rest of the code with no other source code changes.

That difference is more than just “on the surface”.

If your hardware supports more than those two floating point types, then F77 could not support those types in a standard way. Modern fortran can. That capability also is more than just “on the surface”.
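As a sketch of that capability, assuming the compiler provides a real kind with at least 30 decimal digits (gfortran, for example, maps this to a 128-bit kind; selected_real_kind returns -1 if no such kind exists, and the declaration would then fail to compile):

program quad_demo
   implicit none
   ! request at least 30 decimal digits, something F77 could not
   ! express in any standard way
   integer, parameter :: qp = selected_real_kind(p=30)
   real(qp) :: x
   x = 2.0_qp
   print *, 'precision =', precision(x), '  sqrt(2) =', sqrt(x)
end program quad_demo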


And the vast majority of hardware in use by Fortranners still supports only two kinds of floating point, which is what most processors default to as REAL and DOUBLE PRECISION in the language. Consequently, the standard makes no demands on a processor to support anything beyond those two kinds in order to conform, which makes it practically unviable to develop compact library code, without unnecessary duplication, that supports anything other than the two kinds.

Readers should first take a look at the Introduction in this J3 "paper". There are reps of vendors showing up at this org who truly want to do the least for the practitioners of Fortran; everything is "minus 100 points", as mentioned in the paper. Any and all proposals from the practitioners are immediately and reflexively marked, by the author of that paper (who never engages online) and by others on the org, as not seeming "useful enough for the complication".

By the same token, the entire edifice around SELECTED_*_KIND introduced in Fortran 90 is not "useful enough for the complication".
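For readers unfamiliar with it, the machinery being dismissed here is the F90 kind-selection API; a minimal sketch (the module and kind names are illustrative):

module kinds_mod
   implicit none
   ! request kinds by decimal precision (and optionally exponent range);
   ! the function returns -1 if no matching kind exists
   integer, parameter :: sp = selected_real_kind(p=6)
   integer, parameter :: dp = selected_real_kind(p=15, r=300)
end module kinds_mod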

Ok, I agree that Fortran code with default REAL and DOUBLE PRECISION declarations, especially the latter, looks outright ugly and old, and for that reason alone I would go with defined KINDs for any code I pursue in anger. But that's a separate, cosmetic issue.

Considering how reluctant the vendor reps are to take on anything that the practitioners repeatedly request as helpful and productive, and the aversion bordering on indolence displayed in that org toward anything that appears "complicated", the type/kind system is one obnoxiousness they could have avoided. Or, should a group like that embark on such a path, at least take the few extra steps that would have made library development better. But no, they can't be bothered. That's what makes it such a half-baked feature.

Say you have discovered the greatest floating-point expression ever as the answer to everything in

x++;

and you want to offer it as a function in a Fortran library. You can try

module func_kinds_mod
   implicit none
   integer, parameter :: K1 = kind(1.0)
   integer, parameter :: K2 = kind(1D0)
   ! F2018 GENERIC statement; an INTERFACE block does the same in F90+
   generic :: func => func_k1, func_k2
contains
   function func_k1( x ) result(r)
      real(K1), intent(in) :: x
      real(K1) :: r
      r = x + 1.0_k1
   end function
   function func_k2( x ) result(r)
      real(K2), intent(in) :: x
      real(K2) :: r
      r = x + 1.0_k2
   end function
end module func_kinds_mod

which is no different than

module func_default_mod
   implicit none
   ! the same thing with the two F77-era types spelled directly
   generic :: func => func_s, func_d
contains
   function func_s( x ) result(r)
      real, intent(in) :: x
      real :: r
      r = x + 1.0
   end function
   function func_d( x ) result(r)
      double precision, intent(in) :: x
      double precision :: r
      r = x + 1D0
   end function
end module func_default_mod

Anything other than the above and you will be hard-pressed, for ages to come, to find any useful hardware support.

And this is but one small case that shows the whole edifice around the type/kind system is much ado about nothing.

Then, on top of all this, consider the insanity with literal constants in the standard, the topic of this thread, and you can see the half-baked nonsense of the type/kind system.

Hopefully Fortran 202Y will include some relief with literal constants; at least it's on the list for initial consideration. But practitioners had better not count their chickens before they hatch: those who work in committees and want to do the least find the most inventive ways and reasons to deliver the least. Watch out for the official 202Y publication, and then wait years for your "favorite" processor to implement it, which may be never!

Which of the following statements is easier to understand?

real(wp), save :: pi = 3.141592653589793238_wp
real*8 :: pi = 3.141592653589793238
integer*8 :: num = 2**34

As for the first example, what is wp?

The case of 8-byte integer constants can still be messy. Recently I have seen examples like 2_wp**34.
On little-endian hardware, all integer constants could in principle be stored as 8 bytes (the low-order 4 bytes coincide with the 32-bit value), even for routine arguments, but the world of computing is not that easy. 8-byte integers are much more common in 64-bit usage.
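For reference, a portable spelling of such a constant, as a minimal sketch; note that both the variable and the literal need the 64-bit kind, because a plain 2**34 would be evaluated, and would overflow, in default integer arithmetic:

program big_constant
   use, intrinsic :: iso_fortran_env, only: int64
   implicit none
   ! the _int64 suffix on the literal matters: 2**34 does not fit in a
   ! 32-bit default integer
   integer(int64) :: num = 2_int64**34
   print *, num
end program big_constant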

Passing constants as routine arguments, especially real constants, is a risky coding approach. If an interface definition is available, the compiler should be able to warn about any inconsistency and fix it.

It would not take much for a helpful Fortran compiler to provide assistance in this area, but apparently helpfulness is not a requirement of the standard.

And if no interface is available, silently promoting literal real constants to double precision breaks the code.

Spoiler: there exist tons of Fortran codes (mostly legacy, but not only legacy) with implicit interfaces :-)
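To make the hazard concrete, here is an illustrative sketch of my own (not taken from any of the codes being discussed): with an implicit interface nothing checks the call, so the kind of the literal silently determines what lands in the dummy argument:

! a routine compiled separately; callers see no explicit interface
subroutine takes_real(x)
   implicit none
   real :: x
   print *, x
end subroutine takes_real

program implicit_hazard
   implicit none
   external takes_real
   call takes_real(3.14)     ! fine: default-real literal matches the dummy
   ! call takes_real(3.14d0) ! a promoted literal would pass 8 bytes where
                             ! 4 are expected, and with an implicit interface
                             ! no compile-time diagnostic is required
end program implicit_hazard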

Although I tend to agree, at least partly, with the "half-baked" status of the kind system, I also tend to be fed up with the constant aggressiveness directed toward the J3 committee and related bodies, whatever the topic…


I agree entirely that literal constant actual arguments were risky in f77. When trying to write portable code, those argument mismatches were a constant source of errors, usually undetectable at compile time, and problematic to locate and correct at run time. But I don’t agree with that statement for f90+ because of the ability to specify the type/kind in a consistent way. It is that one extra level of abstraction that solves the problem. Now with modern fortran, it is possible to match exactly the actual and dummy argument types and kind. And if an explicit interface is in scope, then the compiler is required to catch those mismatches at compile time, making it much easier for the programmer to avoid such mistakes before an object file is ever created.

As for the compiler itself making "corrections" to such mismatches, for example by invoking copy-in/copy-out argument association, I'm not sure that is a good idea. I'm satisfied, as the programmer, making those corrections myself after being told of the error by the compiler. If copy-in/copy-out is the right way to fix the problem, then I'll do it that way. If changing the argument declaration is the right fix, then I'll do it that way. If a SELECT TYPE block is the right fix, then I'll do it that way. But in any case, it just needs to be fixed once, and then all future compilations will be correct.

As for the problem of writing library code in a generic way that adapts itself to various combinations of real and integer KINDs, I have to admit that this has been a more difficult problem than I expected. It could have been solved at the f90 stage if a standard macro preprocessor had been defined. That is how programmers solved this problem, in a nonstandard way, in the 1980s before f90; I used filepp for this at the time (a C-like preprocessor with looping capabilities). However, that approach was never adopted by standard fortran. Now there is a separate generics proposal in development, which is another approach that looks interesting, so maybe it will be solved that way. But in any case, it is a little surprising to me that this has taken so long to address in a standard way. The fortran type/kind approach seems like the correct first step, but then the second step never occurred.
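As an illustration of the preprocessor route, here is a sketch using fypp, a Python-based Fortran preprocessor (not the filepp tool mentioned above); a single template expands into one specific function per kind:

#:set KINDS = ['sp', 'dp']
module func_mod
   use, intrinsic :: iso_fortran_env, only: sp => real32, dp => real64
   implicit none
   interface func
#:for k in KINDS
      module procedure func_${k}$
#:endfor
   end interface
contains
#:for k in KINDS
   ! one specific function is generated for each kind in KINDS
   function func_${k}$(x) result(r)
      real(${k}$), intent(in) :: x
      real(${k}$) :: r
      r = x + 1.0_${k}$
   end function func_${k}$
#:endfor
end module func_mod

Adding another kind is then a one-token change to the KINDS list, at the cost of a nonstandard build step.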

I think Fortran has been designed for writing applications, and it's quite easy to take the same code and switch precision (say from double to single) by changing a single line. The design could perhaps be improved, but it's not bad in my opinion. At the very least it's a great starting point.

But to write libraries that support multiple precisions at once is not easy and these are the current approaches:

  • duplicate the code
  • use some macro preprocessor
  • create a module that works with any one precision; it gets copied into the user application and the user sets the required precision, but then only one precision is available at a time, not both

I think it would be helpful to make the precision "generic"; here is the proposal for this: Allow an intent(out) argument to adopt the same kind as an input(in) argument · Issue #128 · j3-fortran/fortran_proposals · GitHub

That last option might be a little too restrictive. I think the way the generics proposal works is that the programmer invokes the subprogram, the compiler looks at the actual arguments, and at that time the specific subprogram is compiled to match the arguments. Then if that subprogram is invoked again, the compiler looks to see if a correct compiled version has already been created. If so, then the previous compilation is associated, and if not, then a new compilation is created with the appropriate argument type/kinds. All of this argument matching and compilation is still done at compile time, not at run time. That seems to be open-ended as far as the programmer is concerned.


@RonShepard yes, that is the gist of the proposal at Allow an intent(out) argument to adopt the same kind as an input(in) argument · Issue #128 · j3-fortran/fortran_proposals · GitHub.

21 posts were split to a new topic: Simple Generics

I mean… not really. That's the whole point of this thread. It is unexpected that ‘double precision :: x = 1.23456789012345’ isn't going to store the full double precision value entered. Yes, the standard specifies the concept of default real and that it is context independent. No, that is not intuitive, and thus it is unexpected.
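A minimal demonstration of the surprise in question; the printed digits are typical IEEE-754 output, shown for illustration:

program literal_demo
   implicit none
   double precision :: a = 1.23456789012345     ! default-real literal
   double precision :: b = 1.23456789012345d0   ! double precision literal
   print *, a   ! prints something like 1.2345678806304932: the literal
                ! was rounded to single precision before the assignment
   print *, b   ! prints something like 1.2345678901234500
end program literal_demo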


In C++, if you use double precision literals in mixed expressions, the lower-precision operand will be automatically promoted to the wider type. This means that in a routine like this one,

float times3(float a)
{
    return a * 3.0;
}

the resulting assembly contains conversion instructions, cvtss2sd and cvtsd2ss (here the output of clang 16.0.0 is shown, with no optimizations on):

.LCPI0_0:
        .quad   0x4008000000000000              # double 3
times3(float):                             # @times3(float)
        push    rbp
        mov     rbp, rsp
        movss   dword ptr [rbp - 4], xmm0
        movss   xmm0, dword ptr [rbp - 4]       # xmm0 = mem[0],zero,zero,zero
        cvtss2sd        xmm0, xmm0
        movsd   xmm1, qword ptr [rip + .LCPI0_0] # xmm1 = mem[0],zero
        mulsd   xmm0, xmm1
        cvtsd2ss        xmm0, xmm0
        pop     rbp
        ret

Unexpected right? To make sure the multiplication is performed in the same type as the input argument, you can specify the type explicitly with T( ), e.g.

#include <concepts>  // for std::floating_point (C++20)

template<std::floating_point T>
inline T times3(T x) {
    return T(3.0) * x;
}

// Explicit instantiations
template float times3<float>(float x);
template double times3<double>(double x);

I see a lot of people program sloppily in C++, forgetting to write their literals precisely, e.g. 3.0f or 3U. Not to mention how easy it is to confuse precisions in Python/NumPy, where np.float128 doesn't actually guarantee the extra accuracy (see the discussion under extended precision).


The crux of the issue is this: when it comes to the Fortran standard and its bearers, denial is a river in Egypt (!), perhaps a slow-flowing one. Thus things evolve too little and too late for practitioners to express their computations better and more safely in their Fortran code. The state is shameful and indefensible; Fortran practitioners are simply not being served adequately. Pointing out issues in C++ or Python is beside the point.

What is the actual benefit of this stance, though? In Fortran 202X, or whatever they end up calling it, if they add real16 to iso_fortran_env, real32 will still be the default real kind for nearly every compiler in existence. Thus, when someone writes real(real16) :: x = 1.2345678, it will store the full precision available to the real16 type. Similarly, if someone instead types real(real16) :: x = 1.1, the compiler will attempt to store as close to 1.100000 as a real16 can. There is no benefit to the "default real" being anything other than the highest precision available to the language, or at least the highest precision supported by the target hardware. It's beyond absurd at this point.

EDIT: Actually, it occurred to me that the benefit of this stance is that it allows Fortran to stay in a 32-bit oriented world. So, as all software and hardware moves to 64-bit (Intel is talking about dropping 32-bit support at the hardware level in the future), the "default real kind" in Fortran will just not function at all. Brilliant.


Once again you are confusing 32/64-bit architectures and 32/64-bit floating-point formats. These are completely unrelated. 64-bit floating-point hardware support appeared on the x86 architecture long before it evolved to x86_64. And conversely, in my professional domain most floating-point computations are performed in 32 bits, although we have been using 64-bit platforms and 64-bit OSes for decades. Not because we live in the past, but because of huge data volumes and memory requirements.

What Intel is considering dropping in the future is support for the legacy 32-bit x86 operating modes, which are still present in x86_64 CPUs. Certainly not the hardware support for 32-bit floating point.

It's very easy to make such a statement… as long as one doesn't think about the consequences. First of all, the standard requires the processor to support a second real kind, DOUBLE PRECISION, that:

  • has a precision strictly higher than the precision of the default real
  • occupies exactly twice the storage of a default real

Under these requirements, the default real simply can NOT be the kind with the highest precision. And removing this requirement would break tons of existing code.
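A small sketch makes the constraint visible; on a typical processor with 32-bit IEEE default reals this prints precisions 6 and 15 and storage sizes 32 and 64 bits:

program default_vs_double
   implicit none
   ! the standard ties double precision to default real: greater decimal
   ! precision, and exactly two numeric storage units instead of one
   print *, 'default real:     precision =', precision(1.0), &
            '  bits =', storage_size(1.0)
   print *, 'double precision: precision =', precision(1.0d0), &
            '  bits =', storage_size(1.0d0)
end program default_vs_double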

Fair enough; to be honest I do not understand the full implications of things like this: "Envisioning a Simplified Intel Architecture for the Future". As an aside, my work is also almost entirely performed in 32-bit real floating-point and integer computations, with double precision used sparingly in places like matrix math accumulators and covariance matrices. The reasons are different, namely that this is legacy software written in a time when programmers would go to great lengths to save every byte of memory.

To be pedantic, the default real kind could still be the largest supported in hardware (real64), with real128 from iso_fortran_env then used for double precision, but as you point out this may very well break many legacy applications. I like Fortran as a language. I want it to be successful. For the language to remain relevant, it needs to grow and adapt, as well as attract new practitioners and actually provide a reason for starting new projects with it. These types of issues ("default real kind", default implicit save, and default implicit typing) are nothing but annoying gotchas compared to the vast ecosystem of modern computer programming.

@kargl I really appreciate your experience and contribution to these discussions, but could you please address the actual question? What is the benefit of this stance? I provided an example where the same rules have the opposite effect of what is being discussed here: when the variable has a lower precision than the default real kind, the stored value ends up as close as possible to what the programmer typed. This is not the case when the variable has a higher precision than the default real kind, as with double precision today.

Further, I am a bit confused about what your code example was meant to demonstrate. When I incorporate something like that into a simple program, I don't see any difference between hf and the default real kind (value 4, 32 bits in gfortran).

program main
implicit none

    integer, parameter :: hf = selected_real_kind(p=2, radix=2)

    real(hf) :: xr16 = 1.1_hf
    real :: xdef = 1.1

    write(*,'(a,e13.6,a,i0,a,i0)') 'xr16: ',xr16,' in ',storage_size(xr16),' bits, kind=',kind(xr16)
    write(*,'(a,e13.6,a,i0,a,i0)') 'xdef: ',xdef,' in ',storage_size(xdef),' bits, kind=',kind(xdef)

end program main

outputs:

xr16:  0.110000E+01 in 32 bits, kind=4
xdef:  0.110000E+01 in 32 bits, kind=4

This is in line with my expectations, but I am not sure whether you were trying to say hf should somehow behave differently. Otherwise, what is the purpose of declaring such an integer parameter?
