With gfortran Ver 11.1, there appears to be a different interpretation of large integers vs high precision reals.
If I compile the following program, using gfortran overflow.f90 -o overflow.exe -fno-range-check,
the real constant is still truncated as real(4), while
the integer constant is converted to integer(8).
If I put the real constant in a character string and read as real(8), it retains the precision.
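A minimal sketch of the kind of test program described above (the names and literals are illustrative, not the original overflow.f90), compiled with -fno-range-check:

```fortran
program overflow
  implicit none
  integer(8) :: k2
  real(8)    :: x, y
  character(len=32) :: buf

  k2 = 4000000000          ! integer constant beyond default integer range
  x  = 0.1234567890123456  ! real constant, still evaluated as default real(4)

  buf = '0.1234567890123456'
  read (buf, *) y          ! read into real(8): the precision is retained

  print *, 'k2 =', k2
  print *, 'x  =', x       ! trailing digits lost to real(4) truncation
  print *, 'y  =', y       ! full real(8) precision
end program overflow
```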
The F90 standard should have continued to honor the precision of constants.
For real constants, you should get what you code, rather than the truncation being a hidden outcome.
I don't recall what happened in F77 with real*8 x ; x = 0.1
Portability is something I always take into account. I typically want my code to compile even on old machines which are, of course, 32-bit. Those oldies can still run the latest GNU/Linux decently, so I see no reason to let them collect dust in the attic; as long as the code does not involve really heavy computations, such a machine can be pretty useful even today. To keep things simple and easily modifiable, I always define a module named "Types", very similar to the one @stavros posted, and pretty much every program/module I write has use Types on top. It's not like it solves all portability issues, but at least it is better than something like kind=8, which may or may not be the same everywhere in the code. Issues still remain though, especially when you have to deal with C interoperability, where there is a standard, but almost all libraries either use ANSI C or a mixed scheme of C types, and it can easily become a mess, even with integer interoperability.
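A minimal sketch of such a "Types" module (the names sp/dp/i4/i8 are illustrative; @stavros' original may differ):

```fortran
module Types
  implicit none
  private
  ! Portable kind selection by requested precision/range, not by byte count
  integer, parameter, public :: sp = selected_real_kind(6, 37)    ! ~ 4-byte real
  integer, parameter, public :: dp = selected_real_kind(15, 307)  ! ~ 8-byte real
  integer, parameter, public :: i4 = selected_int_kind(9)         ! ~ 4-byte int
  integer, parameter, public :: i8 = selected_int_kind(18)        ! ~ 8-byte int
end module Types

program demo
  use Types
  implicit none
  real(dp) :: x
  x = 0.1_dp          ! the literal carries the kind, so no truncation
  print *, x
end program demo
```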
One thing I learned in Numerical Analysis is that "double precision" is not the first thing you should try when you get "not very accurate" results. In many cases the reason for inaccuracies is the numerical method itself, not the bytes used in numbers. There are of course many cases where local error in computations accumulates alarmingly because of insufficient precision, so using higher precision is imperative. In that case, a module defining types is much better than kind=8 or anything similar. Therefore, even if you prefer kind=number, you should still use a module defining number.
Well, not so different, IMHO. Without the -fno-range-check option, it gives an error, so by default it obviously treats 4000000000 as a 4-byte integer. If one starts using special options, the behavior changes, but that is because of the options. You could add -fdefault-real-8 to the compiler flags to get
But do you think -fdefault-real-8 and -fno-range-check imply similar actions for real and integer constants?
The big change from F77 to F90 was the way 8-byte reals were initialised.
This was a very frequent misunderstanding and source of errors in time-stepping loops, especially where a step such as dt = 0.1 was used in calculations of the form sin(alpha * dt), with alpha*dt expected to equal pi.
Most experienced Fortran users know to write x = 0.1d0, but users proficient in other languages who try to use Fortran codes come away seeing Fortran as having many hidden traps and not to be trusted.
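The trap described above can be demonstrated in a few lines: 0.1 is a default-precision constant, so assigning it to a double precision variable does not give the same value as 0.1d0.

```fortran
program trap
  implicit none
  double precision :: dt_bad, dt_good
  dt_bad  = 0.1     ! single-precision 0.1, then promoted: extra digits are garbage
  dt_good = 0.1d0   ! double-precision 0.1
  print *, dt_bad            ! roughly 0.100000001490116...
  print *, dt_good           ! roughly 0.100000000000000...
  print *, dt_bad == dt_good ! F
end program trap
```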
No, the latter does not promote the constant to an 8-byte integer (that would require the -fdefault-integer-8 option). Instead, it just lets the constant silently overflow. Actually, the manual says:
-fno-range-check
Disable range checking on results of simplification of constant
expressions during compilation. For example, GNU Fortran will give
an error at compile time when simplifying 'a = 1. / 0'. With this
option, no error will be given and 'a' will be assigned the value
'+Infinity'. If an expression evaluates to a value outside of the
relevant range of [-HUGE():HUGE()], then the expression will be
replaced by '-Inf' or '+Inf' as appropriate. Similarly, 'DATA
i/Z'FFFFFFFF'/' will result in an integer overflow on most systems,
but with -fno-range-check the value will 'wrap around' and 'i' will
be initialized to -1 instead.
Having read that once again, and filtering out the FP stuff, I ended up a bit confused: if the last sentence of the description applied not only to DATA statements but also to variable initialization, then k2 should get a negative value, which apparently is not the case. So I see some inconsistency here.
Other than that, I only wanted to point out that using special options changes the default behavior, so one cannot say "gfortran is doing this" or "gfortran is not doing that" anymore (without referring to the explicitly changed environment).
I have to say that I agree with your views on all these parameterised data types. It's not clear at all to me why anyone would prefer that method. I am a long-time Fortran programmer who has just come back to the language after a few decades. Fortran 90 is new to me, and I wondered where double precision had gone. I tried it and it works fine. I have a lot of F77 code to use which has double precision everywhere, and so I'm still old style. I think if your first programming language was Fortran and not Java or C, then you approach the language very differently.
I think double precision will be in the language forever, so it can be used without worry. But what if your program is giving wrong answers and you suspect round-off error? Or if you want to test if your program gives effectively the same results at a faster speed with single precision? If you declare your double precision variables as real(kind=dp) with dp defined in a module, you can just set dp to the quadruple precision or single precision kind, recompile, and run your program with that precision. If you have hard-coded double precision everywhere your code is less flexible.
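The flexibility argument above can be sketched in a few lines: one module owns the working precision, and every declaration refers to it (the names precision_mod and wp are illustrative).

```fortran
module precision_mod
  implicit none
  private
  ! Switch precision here and recompile:
  ! selected_real_kind(6) -> single, (15) -> double, (33) -> quadruple
  integer, parameter, public :: wp = selected_real_kind(15)
end module precision_mod

program solver
  use precision_mod, only: wp
  implicit none
  real(wp) :: x
  x = 1.0_wp / 3.0_wp     ! literals carry the kind too
  print *, precision(x), x
end program solver
```

Changing the single selected_real_kind call changes the precision of the whole program; with hard-coded double precision the same change means editing every declaration.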
Being a Scientist and Research Engineer, I want my answers to be accurate. Making all real variables double precision was considered "Best Practice" where I worked, as single precision introduces bigger rounding errors that add up over millions of iterations. I used "Commercial Fortran" for a while, and that has the best of both worlds, with additional string-based data types and number types that are accurate for financial numbers. I assumed that these improvements would have been added into F90, but it appears that they have not. Where did the float, byte, and bit data types go? The modern manuals I just got are not written in a user-friendly way and don't even give good examples for read and write, which means the language may have hidden capabilities that the authors are unaware of.
The issue with using DOUBLE PRECISION goes back to the Cray XMP and C90 systems where the default REAL was 64 bits and DOUBLE PRECISION implied 128 bits. Lots of folks (including me) developed code on something like a VAX before moving to a Cray and had to use DOUBLE PRECISION to avoid roundoff errors. Then when I moved to a Cray I had to change the DOUBLE PRECISION back to REAL because doing 128-bit arithmetic slowed things down enough that it wasn't all that much faster than the VAX. Using the KIND parameters is just a lot more flexible and transportable.
This issue of ambiguity with DOUBLE PRECISION is the reason I adopted "real*8" to document precision, where "*n" is a universally recognised description of precision. It is a much more concise and clearer description than real(wp).
The other advantage of using real*8 is that if the compiler objects to the syntax, there are/were far more portability issues to face than just reviewing selected_real_kind when moving to an architecture that does not support 8-byte reals. Since F90 was introduced, most of those hardware issues have disappeared.
The two do not exclude each other. If you think the old-style kind specification is the better way to go, you can use:
module kinds
implicit none
private
real*8 :: dummy
integer, parameter, public :: wp = kind(dummy)
end module
The benefit of a kind parameter is that it centralizes the precision selection in one place. To change the precision, you don't need to modify all of your source files or rely on a compiler flag. The purpose of flags in build scripts tends to get lost with time, whereas the project won't build without the kinds module.
This conversation is very interesting, as we are planning to add a new rule to the GitHub Open Catalog about this topic, something like "Prefer real(kind=kind_value) for declaring floating types" or "Use REAL instead of DOUBLE PRECISION for declaring floating types". We are managing several sources of published Fortran best practices available online, such as this and this.
I guess this conversation confirms the interest in such a rule!
I have found that "centralise" is also the problem, as you have to find the module kinds.
By locally describing the real using a byte-size description of precision, it is a clear and accessible description. Kahan's IEEE754 documentation uses the byte-size description extensively.
DOUBLE PRECISION has always been an ambiguous description, especially without clear documentation of what compiler and hardware it was based on. It was a red flag for probably more problems when porting the code.
I think "universally recognised" is probably a bit of an exaggeration, since it is not recognised by the Fortran standard. Also, it is not a description of precision, it is a description of storage size (i.e. the number of bytes occupied by a value).
I agree "universally" may be an exaggeration. Perhaps easily recognised by Fortran users.
My key point is to read the IEEE754 documentation, which is based on storage size.
It describes a precision that is derived from storage size, not a storage size derived from precision, which is Fortran's kind approach.
The main source of confusion with early FORTRAN real and double precision was the many different hardware storage sizes, which were mostly being rationalised by the '80s. The IEEE754 standard eliminated most of this, but legacy use of REAL and DOUBLE PRECISION was the problem.
Fortran 90 introduced precision, which to me was back-to-front, as with IEEE754 storage size defines precision. Fortran's approach is precision defining kind, while storage size does not exist!
Unfortunately there is not much discussion of what precision is, or of the relationship between precision, round-off and computational accuracy. The required precision varies greatly between different computation approaches. My finite element analysis for plate bending, which uses first and second derivatives, requires a greater precision than solid modelling, which uses only the first derivative. After a while we get to understand the significance of precision, but in my experience this was not sufficiently taught.
Ever since Fortran 2003, the ISO_FORTRAN_ENV module has included file_storage_size, numeric_storage_size, and character_storage_size. On my Ubuntu 22.04 system they are 8, 32, and 8 with these compilers: g95, gfortran, ifx, ifort and AOCC flang.
It also includes parameters like REAL32, REAL64, and so on, which is yet another way to specify the precision indirectly through storage unit size (rather than the reverse).
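A short sketch of the ISO_FORTRAN_ENV parameters mentioned above (the printed values assume the compilers listed earlier, where a numeric storage unit is 32 bits):

```fortran
program env_kinds
  use iso_fortran_env, only: real64, numeric_storage_size
  implicit none
  real(real64) :: x
  x = 0.1_real64
  print *, numeric_storage_size   ! bits per numeric storage unit (32 here)
  print *, storage_size(x)        ! bits occupied by a real64 value: 64
  print *, precision(x)           ! decimal precision of that kind: 15
end program env_kinds
```

So REAL64 names the kind by its storage size in bits, and the precision follows from it, which is the IEEE754 direction of derivation rather than the selected_real_kind one.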
The storage sizes related to KIND are in bits, not bytes. I just find it extreme, the lengths the Fortran standard has gone to in order to deny the existence of a byte, which is also the standard unit of memory addressing.
We also have no relevant memory-address integer kind, which was always a hassle for 32-bit memory addressing.
Both NAG and Silverfrost Fortran adopted kind values that were not byte related and this vagueness with the value of KIND in the standard has introduced complexities related to portability and definition of constants, especially long integers.
Have we discussed the best way to define "double precision" integer and real constants?
Defining an 8-byte real constant for 0.1 seconds is not:
real*8 :: step_size = 0.1
If I want to define 2 gigabytes in Fortran, I don't write
integer*8, parameter :: two_gigabytes = 2 ** 31
but
integer*8, parameter :: two = 2
integer*8, parameter :: two_gigabytes = two ** 31
The errors introduced in these definitions, without a subtle understanding of the Fortran Standard, should be totally unnecessary.
In my code development, where I am moving between Fortran compilers with different KIND values, byte-sized definitions are by far the easiest and most reliable.
Even this forum makes it difficult to type byte sized integer and real definitions.
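One reliable way to write the constants discussed above (a sketch using standard kinds rather than the *8 syntax) is to give the literal itself a kind suffix, so it is never evaluated at the default kind first:

```fortran
program constants
  use iso_fortran_env, only: int64, real64
  implicit none
  ! The kind suffix on the literal avoids default-kind truncation/overflow
  real(real64),   parameter :: step_size     = 0.1_real64        ! not plain 0.1
  integer(int64), parameter :: two_gigabytes = 2_int64 ** 31     ! not plain 2**31
  print *, step_size
  print *, two_gigabytes   ! 2147483648
end program constants
```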
The motto of Fortran is "abstracting away the machine", and in the long term (a career or a human life) it is not so absurd. Although the length of a byte has stabilized at 8 bits for a very long time now, it is difficult to imagine the computer and microelectronics world in 30 years. Although there is a microelectronics roadmap, surprises may arise.