Best way to declare a double precision in Fortran?

With gfortran version 11.1, there appears to be a different interpretation of large integer constants vs. high-precision real constants.
If I compile the following program with gfortran overflow.f90 -o overflow.exe -fno-range-check,
the real constant is still truncated to real(4) precision, while
the integer constant is converted to integer(8):

! overflow.f90
!    integer*8 :: k2 = 2147483648
    integer*8 :: k2 = 4000000000
    real*8    :: pi = 3.14159265358979323
!
    write (*,*) 'k2,pi', k2,pi, 4*atan(1.d0)
    end

k2,pi 4000000000 3.1415927410125732 3.1415926535897931

If I put the real constant in a character string and read it into a real(8), the precision is retained.
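A minimal sketch of that workaround (file and variable names are illustrative):

! readpi.f90 : internal read of the constant's digits into a real(8)
    character(len=24) :: buf = '3.14159265358979323'
    real*8 :: pi
    read (buf, *) pi           ! digits are parsed at run time into real(8)
    write (*,*) 'pi', pi       ! prints 3.1415926535897931, full precision
    end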

The F90 standard should have retained honoring the precision of constants.
For real constants, you should get what you code, rather than the truncation being a hidden outcome.
I don't recall what happened in F77 with real*8 x ; x = 0.1


Portability is something I always take into account. I typically want my code to compile even on old machines which are, of course, 32-bit. Those oldies can still run the latest GNU/Linux decently, so I see no reason to let them collect dust in the attic; as long as the code does not involve really heavy computations, such a machine can be pretty useful even today. To keep things simple and easily modifiable, I always define a module named “Types”, very similar to the one @stavros posted, and pretty much every program/module I write has use Types on top. It does not solve all portability issues, but at least it is better than something like kind=8, which may or may not mean the same kind everywhere in the code. Issues still remain though, especially when you have to deal with C interoperability: there is a standard, but almost all libraries either use ANSI C or a mixed scheme of C types, and it can easily become a mess, even for integer interoperability.
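For reference, a minimal sketch of such a module (the names sp, dp and wp are my own convention, not necessarily what @stavros posted):

module Types
   implicit none
   ! kind parameters requested by precision, not by a magic number like 8
   integer, parameter :: sp = selected_real_kind(6, 37)    ! at least IEEE single
   integer, parameter :: dp = selected_real_kind(15, 307)  ! at least IEEE double
   integer, parameter :: wp = dp   ! working precision: change it here, once
end module Types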

One thing I learned in Numerical Analysis is that “double precision” is not the first thing you should try when you get “not very accurate” results. In many cases the reason for the inaccuracy is the numerical method itself, not the number of bytes in the numbers. There are of course many cases where the local error in computations accumulates alarmingly because of insufficient precision, so using higher precision is imperative. In that case, a module defining types is much better than kind=8 or anything similar. So even if you prefer kind=number, you should still define that number in a module.
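Assuming the Types module sketched above, usage would then look like this, and switching the whole code to another precision means editing one line in the module:

program demo
   use Types, only: wp
   implicit none
   real(wp) :: dt = 0.1_wp   ! the literal follows the working precision too
   write (*,*) dt
end program demo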


Well, not so different, IMHO. Without the -fno-range-check option, it gives an error, so by default it obviously treats 4000000000 as a 4-byte integer. If one starts using special options, the behavior changes, but that is because of the options. You could add -fdefault-real-8 to the compilation to get

k2,pi 4000000000 3.1415926535897931 3.14159265358979323846264338327950280

but this does not mean that gfortran treats real constants without d0 according to their number of digits.

But do you think -fdefault-real-8 and -fno-range-check imply similar actions for real and integer constants?

The big change from F77 to F90 was the way 8-byte reals were initialised.

This was a very frequent misunderstanding and source of errors in time-stepping loops, especially where a step such as “dt = 0.1” was used in calculations of the form sin(alpha * dt), with alpha*dt expected to equal pi.

Most experienced Fortran users know to write x = 0.1d0, but users proficient in other languages who try to use Fortran codes come away seeing Fortran as having many hidden traps and as not to be trusted.
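A minimal sketch of that trap (names are illustrative; alpha is chosen so that alpha*dt should equal pi):

! trap.f90 : a single-precision constant contaminating sin(alpha*dt)
    real*8 :: alpha, dt_bad, dt_good, pi
    pi = 4*atan(1.d0)
    alpha   = 10*pi
    dt_bad  = 0.1        ! real(4) constant: 0.100000001490116... after widening
    dt_good = 0.1d0      ! real(8) constant: 0.1000000000000000055...
    write (*,*) sin(alpha*dt_bad)    ! ~ -4.7e-8 : alpha*dt misses pi
    write (*,*) sin(alpha*dt_good)   ! ~  1.e-16 : only the usual rounding
    end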

No, the latter does not promote the constant to an 8-byte integer (that would require the -fdefault-integer-8 option). Instead, it just lets the constant silently overflow. Actually, the manual says:

-fno-range-check
Disable range checking on results of simplification of constant
expressions during compilation. For example, GNU Fortran will give
an error at compile time when simplifying "a = 1. / 0". With this
option, no error will be given and "a" will be assigned the value
"+Infinity". If an expression evaluates to a value outside of the
relevant range of ["-HUGE()":"HUGE()"], then the expression will be
replaced by "-Inf" or "+Inf" as appropriate. Similarly, "DATA
i/Z'FFFFFFFF'/" will result in an integer overflow on most systems,
but with -fno-range-check the value will "wrap around" and "i" will
be initialized to -1 instead.

Having read that once again and filtering out the floating-point stuff, I ended up a bit confused: if the last sentence of the description applied not only to DATA statements but also to variable initialization, then k2 should get a negative value, which apparently is not the case. So I see some inconsistency here.
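One could test both forms side by side (file name illustrative, compiled with gfortran wrap.f90 -fno-range-check); per the manual I would expect i = -1, while the output quoted above shows k2 keeping 4000000000:

    integer*4 :: i
    integer*8 :: k2 = 4000000000
    data i /Z'FFFFFFFF'/
    write (*,*) 'i  =', i     ! manual: wraps around to -1
    write (*,*) 'k2 =', k2    ! observed above: 4000000000, no wrap
    end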

Other than that, I only wanted to point out that using special options changes the default behavior, so one cannot say “gfortran is doing this” or “gfortran is not doing that” anymore (without referring to the explicitly changed environment).