Best way to declare a double precision in Fortran?

With gfortran 11.1, there appears to be a different interpretation of large integer constants vs. high-precision real constants.
If I compile the following program with gfortran overflow.f90 -o overflow.exe -fno-range-check,
the real constant is still truncated to real(4), while
the integer constant is converted to integer(8):

! overflow.f90
!    integer*8 :: k2 = 2147483648
    integer*8 :: k2 = 4000000000
    real*8    :: pi = 3.14159265358979323
!
    write (*,*) 'k2,pi', k2,pi, 4*atan(1.d0)
    end

k2,pi 4000000000 3.1415927410125732 3.1415926535897931

If I put the real constant in a character string and read it as real(8), it retains the precision.
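
Something along these lines (a minimal sketch; the names are my own):

! read_pi.f90 - internal read keeps all the digits
character(len=20) :: str
real*8 :: pi
str = '3.14159265358979323'
read (str, *) pi            ! list-directed internal read into a real(8)
write (*,*) pi              ! 3.1415926535897931, no single-precision truncation
end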

The F90 standard should have retained honoring the precision of constants.
For real constants, you should get what you code, rather than the truncation being a hidden outcome.
I don’t recall what happened in F77 with real*8 x ; x = 0.1

1 Like

Portability is something I always take into account. I typically want my code to compile even on old machines which are, of course, 32-bit. Those oldies can still run the latest GNU/Linux decently, so I see no reason to let them collect dust in the attic; as long as the code does not involve really heavy computations, such a machine can be pretty useful even today. To keep things simple and easily modifiable, I always define a module named “Types”, very similar to the one @stavros posted, and pretty much every program/module I write has use Types on top. It’s not like it solves all portability issues, but at least it is better than something like kind=8, which may or may not be the same everywhere in the code. Issues still remain, though, especially when you have to deal with C interoperability, where there is a standard, but almost all libraries use either ANSI C or a mixed scheme of C types, and it can easily become a mess, even with integer interoperability.
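
A minimal sketch of such a module (the kind names here are just the ones I happen to use):

module Types
implicit none
private
! kinds requested by precision/range, so no kind value appears in client code
integer, parameter, public :: sp = selected_real_kind(6, 37)    ! IEEE single
integer, parameter, public :: dp = selected_real_kind(15, 307)  ! IEEE double
end module Types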

One thing I learned in Numerical Analysis is that “double precision” is not the first thing you should try when you get “not very accurate” results. In many cases the reason for inaccuracies is the numerical method itself, not the bytes used in numbers. There are of course many cases where local error accumulates alarmingly because of insufficient precision, so using higher precision is imperative. In that case, a module defining types is much better than kind=8 or anything similar. Therefore, even if you prefer kind=number, you should still use a module defining number.

1 Like

Well, not so different, IMHO. Without the -fno-range-check option, it gives an error, so by default it obviously treats 4000000000 as a 4-byte integer. If one starts using special options, the behavior changes, but this is because of the options. You could add -fdefault-real-8 to the compiler to get

k2,pi 4000000000 3.1415926535897931 3.14159265358979323846264338327950280

but this wouldn’t mean that gfortran is treating real constants without d0 according to their number of digits.

But do you think -fdefault-real-8 and -fno-range-check imply similar actions for real and integer constants?

The big change from F77 to F90 was the way 8-byte reals were initialised.

This was a very frequent misunderstanding and source of errors in time-stepping loops, especially where a step such as dt = 0.1 was used in calculations of the form sin(alpha * dt), with alpha * dt expected to equal pi.

Most experienced Fortran users know to write x = 0.1d0, but users proficient in other languages who try to use Fortran codes come away seeing Fortran as having many hidden traps and not to be trusted.
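
A minimal sketch of the trap (output values are approximate, assuming IEEE doubles):

! dt_trap.f90 - the constant 0.1 is single precision, then widened
real*8 :: dt, alpha, pi
pi = 4.d0 * atan(1.d0)
alpha = 10.d0 * pi
dt = 0.1                    ! real(4) constant 0.100000001490... widened to real(8)
write (*,*) sin(alpha*dt)   ! about -4.7E-8 instead of the expected ~0
dt = 0.1d0                  ! real(8) constant
write (*,*) sin(alpha*dt)   ! on the order of 1.E-16: double-precision rounding only
end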

No, the latter does not promote the constant to an 8-byte integer (that would require the -fdefault-integer-8 option). Instead, it just lets the constant silently overflow. Actually, the manual says:

-fno-range-check
Disable range checking on results of simplification of constant
expressions during compilation. For example, GNU Fortran will give
an error at compile time when simplifying “a = 1. / 0”. With this
option, no error will be given and “a” will be assigned the value
“+Infinity”. If an expression evaluates to a value outside of the
relevant range of [“-HUGE()”:“HUGE()”], then the expression will be
replaced by “-Inf” or “+Inf” as appropriate. Similarly, “DATA
i/Z’FFFFFFFF’/” will result in an integer overflow on most systems,
but with -fno-range-check the value will “wrap around” and “i” will
be initialized to -1 instead.

Having read that once again and filtering out the FP stuff, I ended up a bit confused: if the last sentence of the description applied not only to DATA statements but also to variable initialization, then k2 should get a negative value, which apparently is not the case. So I see some inconsistency here.

Other than that, I only wanted to point out that using special options changes the default behavior so one cannot say “gfortran is doing this” or “gfortran is not doing that” anymore (without referring to the explicitly changed environment)

I have to say that I agree with your views on all these parameterised data types. It’s not clear at all to me why anyone would prefer that method. I am a long-time Fortran programmer who has just come back to the language after a few decades. Fortran 90 is new to me and I wondered where double precision had gone. I tried it and it works fine. I have a lot of F77 code to use which has double precision everywhere, so I’m still old style. I think if your first programming language was Fortran and not Java or C, then you approach the language very differently.

I think double precision will be in the language forever, so it can be used without worry. But what if your program is giving wrong answers and you suspect round-off error? Or if you want to test if your program gives effectively the same results at a faster speed with single precision? If you declare your double precision variables as real(kind=dp) with dp defined in a module, you can just set dp to the quadruple precision or single precision kind, recompile, and run your program with that precision. If you have hard-coded double precision everywhere your code is less flexible.
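
A minimal sketch of that workflow (the module and kind names are placeholders):

module prec
implicit none
! flip precision for the whole project by changing this one line:
integer, parameter :: dp = selected_real_kind(15, 307)    ! double
!integer, parameter :: dp = selected_real_kind(33, 4931)  ! quadruple
!integer, parameter :: dp = selected_real_kind(6, 37)     ! single
end module prec

program demo
use prec, only: dp
implicit none
real(dp) :: x
x = 0.1_dp                  ! the literal follows whatever kind dp selects
write (*,*) x
end program demo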

5 Likes

Being a Scientist and Research Engineer, I want my answers to be accurate. Making all real variables double precision was considered “best practice” where I worked, as single precision introduces bigger rounding errors that add up over millions of iterations. I used “Commercial Fortran” for a while, and that has the best of both worlds, with additional string-based data types and number types that are accurate for financial figures. I assumed that these improvements would have been added to F90, but it appears they have not. Where did the float, byte, and bit data types go? The modern manuals I just got are not written in a user-friendly way and don’t even give good examples for read and write, which means the language may have hidden capabilities that the authors are unaware of.

The issue with using DOUBLE PRECISION goes back to the Cray XMP and C90 systems where the default REAL was 64 bits and DOUBLE PRECISION implied 128 bits. Lots of folks (including me) developed code on something like a VAX before moving to a Cray and had to use DOUBLE PRECISION to avoid roundoff errors. Then when I moved to a Cray I had to change the DOUBLE PRECISION back to REAL because doing 128 bit arithmetic slowed things down enough that it wasn’t all that much faster than the VAX. Using the KIND parameters is just a lot more flexible and transportable.

3 Likes

This issue of ambiguity with DOUBLE PRECISION is the reason I adopted real*8 to document precision, where *n is a universally recognised description of precision. It is a much more concise and clearer description than real(wp).
The other advantage of using real*8 is that, if the compiler objects to the syntax, there are/were far more portability issues than just reviewing selected_real_kind when moving to an architecture that does not support 8-byte reals. Since F90 was introduced, most of those hardware issues have disappeared.

1 Like

The two do not exclude each other. If you think the old-style kind specification is the better way to go, you can use:

module kinds
implicit none
private
real*8 :: dummy
integer, parameter, public :: wp = kind(dummy)
end module

The benefit of a kind parameter is that it centralizes the precision selection in one place. To change the precision, you don’t need to modify all of your source files or rely on a compiler flag. The purpose of flags in build scripts tends to get lost with time, whereas the project won’t build without the kinds module.
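
A usage sketch (hypothetical program, assuming the kinds module above):

program demo
use kinds, only: wp
implicit none
real(wp) :: x
x = 1.0_wp / 3.0_wp       ! the literal carries the centrally defined kind
write (*,*) x             ! 0.33333333333333331 when wp is the real*8 kind
end program demo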

1 Like

This conversation is very interesting, as we are planning to add a new rule to the GitHub Open Catalog about this topic, something like “Prefer real(kind=kind_value) for declaring floating types” or “Use REAL instead of DOUBLE PRECISION for declaring floating types”. We are managing several sources of published Fortran best practices available online, such as this and this.

I guess this conversation confirms the interest in such a rule!

1 Like

I have found that “centralise” is also the problem, as you have to find the module kinds.
By locally describing the real with a byte-size description of precision, you get a clear and accessible description. Kahan’s IEEE 754 documentation uses the byte-size description extensively.

DOUBLE PRECISION has always been an ambiguous description, especially without clear documentation of the compiler and hardware it was based on. It was a red flag for probably more problems when porting the code.

I think “universally recognised” is probably a bit of an exaggeration, since it is not recognised by the Fortran standard. Also, it is not a description of precision, it is a description of storage size (i.e. number of bytes occupied by a value).

2 Likes

I agree “universally” may be an exaggeration. Perhaps easily recognised by Fortran users.

My key point is to read the IEEE 754 documentation, which is based on storage size.
It describes a precision that is derived from storage size, not a storage size derived from precision, which is Fortran’s kind approach.

The main source of confusion with early FORTRAN REAL and DOUBLE PRECISION was the many different hardware storage sizes, which were mostly rationalised by the 80s. The IEEE 754 standard eliminated most of this, but legacy use of REAL and DOUBLE PRECISION remained the problem.

Fortran 90 introduced precision, which to me was back-to-front: with IEEE 754, storage size defines precision, whereas Fortran’s approach is precision defining kind, while storage size does not exist!

Unfortunately there is not much discussion of what precision is, or of the relationship between precision, round-off and computational accuracy. The required precision varies greatly between different computational approaches. My finite element analysis of plate bending, which uses first and second derivatives, requires greater precision than solid modelling, which uses only the first derivative. After a while we get to understand the significance of precision, but in my experience this was not sufficiently taught.

Ever since Fortran 2003, the ISO_FORTRAN_ENV module has included
file_storage_size, numeric_storage_size, and character_storage_size. On my Ubuntu 22.04 system they are 8, 32, and 8 with these compilers: g95, gfortran, ifx, ifort and AOCC flang.
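
A quick way to check on any compiler (a minimal sketch):

program storage
use iso_fortran_env, only: file_storage_size, numeric_storage_size, character_storage_size
implicit none
! all three values are expressed in bits
write (*,*) file_storage_size, numeric_storage_size, character_storage_size   ! 8 32 8 here
end program storage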

1 Like

It also includes parameters like REAL32, REAL64, and so on, which is yet another way to specify the precision indirectly through storage unit size (rather than the reverse).
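
For instance (a minimal sketch):

program kinds_by_size
use iso_fortran_env, only: real32, real64
implicit none
real(real32) :: s      ! kind selected by 32-bit storage size
real(real64) :: d      ! kind selected by 64-bit storage size
write (*,*) precision(s), precision(d)   ! typically 6 and 15 decimal digits
end program kinds_by_size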

It is a bit confusing; REAL32 and REAL64 are Fortran 2008, as I find here:

ISO_FORTRAN_ENV (The GNU Fortran Compiler)

2 Likes

The storage sizes related to KIND are in bits, not bytes. I just think the lengths the Fortran Standard has taken to deny the existence of a byte, which is also the standard unit of memory addressing, are extreme.
We also have no relevant memory-address integer kind, which was always a hassle for 32-bit memory addressing.
Both NAG and Silverfrost Fortran adopted kind values that are not byte-related, and this vagueness about the value of KIND in the standard has introduced complexities related to portability and the definition of constants, especially long integers.
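
(For what it is worth, the closest thing I know of to an address-sized integer kind comes from C interoperability rather than the core language; a minimal sketch:)

program addr_kind
use iso_c_binding, only: c_intptr_t
implicit none
integer(c_intptr_t) :: addr    ! integer wide enough to hold a C pointer
write (*,*) storage_size(addr) ! 64 bits on a 64-bit target, 32 on a 32-bit one
end program addr_kind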

Have we discussed the best way to define “double precision” integer and real constants?

Defining an 8-byte real constant for 0.1 seconds is not

real*8 :: step_size = 0.1

If I want to define 2 gigabytes in Fortran, I don’t write

integer*8, parameter :: two_gigabytes = 2 ** 31

but

integer*8, parameter :: two = 2
integer*8, parameter :: two_gigabytes = two ** 31

The errors these definitions introduce, for anyone without a subtle understanding of the Fortran Standard, should be totally unnecessary.
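
For comparison, kind suffixes on the literals themselves also avoid both traps, with the caveat discussed above that the kind value 8 is gfortran’s convention, not universal:

real*8,    parameter :: step_size     = 0.1d0      ! d0 makes the constant double precision
integer*8, parameter :: two_gigabytes = 2_8 ** 31  ! 2_8 is already an 8-byte integer, so no overflow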

In my code development, where I am moving between Fortran compilers with different KIND values, byte-sized definitions are by far the easiest and most reliable.

Even this forum makes it difficult to type byte-sized integer and real definitions.

The motto of Fortran is “abstracting away the machine”, and in the long term (a career, or a human lifetime) it is not so absurd. The length of a byte has been stable at 8 bits for a very long time now, but it is difficult to imagine the computer and microelectronics world in 30 years. Although there is a microelectronics roadmap, surprises may arise.

4 Likes