With gfortran version 11.1, there appears to be a different interpretation of large integer constants versus high-precision real constants.
If I compile the following program, using gfortran overflow.f90 -o overflow.exe -fno-range-check,
the real constant is still truncated to real(4), while
the integer constant is converted to integer(8).
If I put the real constant in a character string and read as real(8), it retains the precision.
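A minimal sketch of the three cases described above (the program name follows the `overflow.f90` in the compile line; the literal digits are illustrative):

```fortran
! Compile with: gfortran overflow.f90 -o overflow.exe -fno-range-check
program overflow
   implicit none
   integer(8) :: k
   real(8)    :: x, y
   character(len=32) :: buf

   ! With -fno-range-check this large constant is accepted; the
   ! question is whether it stays integer(4) or becomes integer(8).
   k = 4000000000

   ! The literal is a default real(4) constant; it is truncated to
   ! single precision before being stored in the real(8) variable.
   x = 0.123456789012345678

   ! Reading the same digits from a character string into a real(8)
   ! variable retains the full double precision.
   buf = '0.123456789012345678'
   read (buf, *) y

   print *, k
   print *, x   ! only ~7 significant digits are meaningful
   print *, y   ! full double-precision value
end program overflow
```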
The F90 standard should have kept honoring the precision of constants.
For real constants, you should get what you code, rather than having the truncation be a hidden outcome.
I don't recall what happened in F77 with real*8 x ; x = 0.1
Portability is something I always take into account. I typically want my code to compile even on old machines which are, of course, 32-bit. Those oldies can still run the latest GNU/Linux decently, so I see no reason to let them collect dust in the attic; as long as the code does not involve really heavy computations, such a machine can be pretty useful even today. To keep things simple and easily modifiable, I always define a module named "Types", very similar to the one @stavros posted, and pretty much every program/module I write has use Types on top. It's not like it solves all portability issues, but at least it is better than something like kind=8, which may or may not be the same everywhere in the code. Issues still remain though, especially when you have to deal with C interoperability, where there is a standard, but almost all libraries either use ANSI C or a mixed scheme of C types, and it can easily become a mess, even with integer interoperability.
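A minimal sketch of such a Types module (the kind names sp/dp/i4/i8 and the exact selected_real_kind/selected_int_kind requests are illustrative, not the poster's actual module):

```fortran
! A small "Types" module in the spirit described above.
module Types
   implicit none
   integer, parameter :: sp = selected_real_kind(6, 37)    ! single precision
   integer, parameter :: dp = selected_real_kind(15, 307)  ! double precision
   integer, parameter :: i4 = selected_int_kind(9)         ! 4-byte integer
   integer, parameter :: i8 = selected_int_kind(18)        ! 8-byte integer
end module Types

program demo
   use Types
   implicit none
   real(dp) :: x
   x = 0.1_dp          ! the kind is spelled once, via the module
   print *, x
end program demo
```

Unlike a hard-coded kind=8, the module requests a precision and range, so the same source maps onto whatever kinds the compiler provides.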
One thing I learned in Numerical Analysis is that "double precision" is not the first thing you should try when you get "not very accurate" results. In many cases the reason for inaccuracies is the numerical method itself, not the bytes used in numbers. There are of course many cases where local error in computations accumulates alarmingly because of insufficient precision, so using higher precision is imperative. In that case, a module defining types is much better than kind=8 or anything similar. Therefore, even if you prefer kind=number, you should still use a module defining number.
Well, not so different, IMHO. Without the -fno-range-check option, it gives an error, so by default it obviously treats 4000000000 as a 4-byte integer. If one starts using special options, the behavior changes, but that is because of the options. You could also add -fdefault-real-8 to change the default real kind.
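For example, with a sketch like this (file name and literal are illustrative), the same source prints different precision depending on the flag:

```fortran
! Compare:  gfortran prec.f90               (literal truncated to real(4))
!      vs:  gfortran -fdefault-real-8 prec.f90   (literal kept as real(8))
program prec
   implicit none
   real(8) :: x
   x = 0.123456789012345678   ! a default-real constant
   print *, x                 ! ~15 digits with -fdefault-real-8,
                              ! only ~7 meaningful digits without it
end program prec
```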
But do you think -fdefault-real-8 and -fno-range-check imply similar actions for real and integer constants?
The big change from F77 to F90 was the way 8-byte reals were initialised.
This was a very frequent misunderstanding and source of errors in time-stepping loops, especially where a step such as dt = 0.1 was used in calculations of the form sin(alpha * dt), where alpha*dt was expected to equal pi.
Most experienced Fortran users know to write x = 0.1d0, but users proficient in other languages who try to use Fortran codes come away seeing Fortran as having many hidden traps and not to be trusted.
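The trap can be made visible with a short sketch:

```fortran
program dt_trap
   implicit none
   real(8) :: a, b
   a = 0.1      ! real(4) constant, widened on assignment: ~1e-9 error
   b = 0.1d0    ! real(8) constant: correct to ~1e-17
   print *, b - a   ! nonzero: the hidden truncation of 0.1
end program dt_trap
```

The difference b - a is exactly the error that accumulates step by step in a dt = 0.1 time-stepping loop.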
No, the latter does not promote the constant to an 8-byte integer (that would require the -fdefault-integer-8 option). Instead, it just lets the constant silently overflow. Actually, the manual says:
-fno-range-check
Disable range checking on results of simplification of constant
expressions during compilation. For example, GNU Fortran will give
an error at compile time when simplifying 'a = 1. / 0'. With this
option, no error will be given and 'a' will be assigned the value
'+Infinity'. If an expression evaluates to a value outside of the
relevant range of [-HUGE():HUGE()], then the expression will be
replaced by '-Inf' or '+Inf' as appropriate. Similarly, 'DATA
i/Z'FFFFFFFF'/' will result in an integer overflow on most systems,
but with -fno-range-check the value will 'wrap around' and 'i' will
be initialized to -1 instead.
Having read that once again and filtering out the FP stuff, I ended up a bit confused: if the last sentence of the description applied not only to DATA statements but also to ordinary variable initialization, then k2 should get a negative value, which apparently is not the case. So I see some inconsistency here.
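The apparent inconsistency can be checked with a sketch like this (the variable names follow the discussion; the observed result may vary by gfortran version):

```fortran
! Compile with: gfortran -fno-range-check boz.f90
program boz
   implicit none
   integer :: k1, k2
   data k1 /z'FFFFFFFF'/   ! per the manual, wraps around to -1
   k2 = 4000000000         ! does this assignment also wrap negative?
   print *, k1, k2
end program boz
```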
Other than that, I only wanted to point out that using special options changes the default behavior, so one cannot say "gfortran is doing this" or "gfortran is not doing that" anymore (without referring to the explicitly changed environment).