It’s been proposed here: https://github.com/j3-fortran/fortran_proposals/issues/83, go ahead and one-up the proposal if you like it.
The two options that I can imagine are that one can either do it in the source code itself, or in fpm.toml.
The key for me is doing it programmatically. Relying on anything external to the language is a complete failure of any “standard” effort. Thanks for the link though. I will check it out and voice my support.
The kind system is basically an excellent idea. You can write a full module with a given kind and then just have to change a single line if you feel that you need more or less precision, for instance.
! just change this line to update thousands of lines
integer, parameter :: wp = selected_real_kind(p=10)
real(wp) :: foo
complex(wp), allocatable :: bar(:,:)
...
Just, the selected_xxx_kind() mechanism alone was not fully satisfactory at the beginning. The iso_fortran_env module improved the situation a lot, and all of this will make full sense with generic programming (hopefully in the near future).
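Something along these lines, for instance (a minimal sketch using the named kind constants from iso_fortran_env; the name wp is just a convention, nothing mandated):
use, intrinsic :: iso_fortran_env, only: real32, real64
! switch the whole module between single and double precision here
integer, parameter :: wp = real64
real(wp) :: foo
complex(wp), allocatable :: bar(:,:)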
My understanding was that kinds were introduced in Fortran 90. It is hardly reasonable to call it “an excellent system” when 30 years later the supposed reason for it being so awesome (generic programming) does not even exist yet.
If the goal is to change precision program-wide with a single line, there are recursive command-line find-and-replace commands you could run, or preprocessor macros could have served the same purpose as ‘integer, parameter :: wp = selected_real_kind(10)’.
The problem is that different “kinds” of “the same data type” simply are not the same data type. However, to work backwards compatibility into this model, decisions like having a “default real kind” had to be made.
This would not have been an issue if kinds were made mandatory. Free form source code? Variable declarations require a kind. This problem we are discussing now would not exist.
Generic programming was not the reason why the kind system was introduced. What I am saying is that the kind system will make even more sense with generic programming.
How can you seriously compare changing a single line in a whole code with relying on (error-prone) find-and-replace utilities or tedious preprocessor directives? By the way, preprocessor directives were the only way at a time when one was mixing machines with 32-bit or 64-bit default reals: there were “#ifdef WORD64” or “#ifdef WORD32” everywhere in the codes, and it was just a pain.
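Just to illustrate what that looked like, a rough sketch of the kind of conditional code described here (illustrative only, not taken from any particular program; it assumes a WORD64 macro defined at build time):
program old_style
#ifdef WORD64
   real :: x                ! default real is already 64-bit on this machine
#else
   double precision :: x    ! otherwise request 64 bits explicitly
#endif
   x = 3.141592653589793d0
   print *, x
end program old_style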
Why is it a problem?
You have already explained why this approach is a bad idea when you said,
@RonShepard and @PierU,
The Lahey compilers I used on PCs prior to F90 utilised 80-bit registers to provide enhanced precision. Real constants were stored as 80-bit constants and provided better precision than was available in standard-complying F90 compilers.
As we have seen in the SUM thread, an 80-bit accumulator is a definite advantage for precision.
Achieved 64-bit real calculation precision has been degraded by both the F90 standard and the move from 80-bit accumulators to 64-bit SIMD instructions.
Perhaps I should have said 8087 PC compilers, rather than 32-bit compilers.
The following code example shows two common error sources for my numerical computation, the first where the compiler should recognise an extended precision constant and the second where a typical time step can lead to unexpected results. For both of these, the Lahey compiler provided improved precision.
real*8 :: one_pi = 3.1415926535897932   ! no D exponent, so the constant is truncated to default real first
real*8 :: time_step = 0.1               ! 0.1 is stored as a single-precision approximation, then widened
write (*,*) one_pi, time_step
write (*,*) 4*atan(1.0d0)               ! full double-precision pi, for comparison
end
As a fortran user, if you had the choice that real constants were by default created/stored to a higher precision, what would you choose ?
Assuming you are correct, which is still doubtful to me, it was then a non-standard behavior of this specific compiler. As shown by the link I have given, the F77 standard was describing floating point constants of the form “5.35” to be interpreted as the default real type. At the very least it was not requiring at all these constants to be promoted to higher precision:
4.4 Real_Type
…
4.4.1 Basic_Real_Constant. The form of a basic real
constant is an optional sign, an integer part, a
decimal point, and a fractional part, in that order.
Non-standard behavior also means non-portable: a code relying on the fact that constants are implicitly promoted to higher precision would have broken with a standard-compliant compiler that would not promote the constants.
And here is why it is doubtful to me: implicitly promoting the real constants would be a problem for constants appearing as actual arguments:
call foo(5.35)   ! the literal must be a default real to match the dummy argument
subroutine foo(x)
real :: x        ! default-real dummy; a promoted constant would not match
...
end subroutine
If 5.35 is promoted to anything other than the default real, the code is invalid (and would have been invalid in F77 too). So the compiler had to differentiate between cases, depending on where the constants appeared. Possible in theory, but unlikely.
And BTW this has little to do with registers. When compiling a code, the compiler does not store constants in registers.
It’s not about my preference, but about the standard. You cannot assume that a real constant of the form “5.35” is anything other than a default real, and this was true also before F90. If some compilers used to behave differently, fine, but it was non-standard and thus non-portable.
Maybe I am improperly placing blame on the kind system. Yet, I still cannot see how it is viewed as a positive thing that ‘double precision :: x = 1.2345678901234’ is going to lose precision at compile time, standard or not. The Fortran standard saying something is one way or another does not inherently make that way the best, ideal, or even a good idea at all. It only describes how one should expect a standard-conforming compiler to operate.
It also sounds to me like compilers in the past implemented clever solutions to at least some of the perceived problems of the time, but now because “that’s not in the standard,” people seem to turn their brains off and be unable to see past that fact.
Sorry, but you are missing the point, here: sticking to the standard is not about fetishism or about claiming that the standard is perfect, it is to ensure portability as much as possible.
You may find a compiler that, given double precision :: x = 1.2345678901234, interprets the constant as a double precision one, but if your code is then compiled with another compiler that behaves differently, your whole code may give wrong results, and the worst thing is that you won’t even know why.
That last part has always been important to me. When a compiler silently does such things as type/kind promotion behind your back, in an undocumented and nonportable way, it causes more problems than it solves. If you want an extended precision value, then with modern fortran it is simple enough for the programmer to specify that precision: real(wp) :: x = 1.2345678901234_wp. And yes, to another previous question, sometimes you do want to assign a default real constant value to an extended precision variable, so when the compiler silently promotes it, it is doing something that is unwanted, and then you must expend effort getting it to do what you told it to do in the first place.
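To make the two intents in that paragraph concrete, a minimal sketch (wp is assumed to be a real kind parameter defined elsewhere, e.g. via selected_real_kind(15)):
real(wp) :: x = 1.2345678901234_wp   ! full wp precision, stated explicitly
real(wp) :: y = 1.2345678901234      ! deliberately a default-real value, converted on assignment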
The silent promotion of variables to 80-bit registers also causes various problems, but that is a separate issue. How many dozens of times have you had an iterative algorithm fail to detect convergence correctly because it was comparing 80-bit register differences rather than actual 64-bit value differences? Then you turn on debugging options to look for the bug, and it goes away. I’m not arguing against using 80-bit values or extended precision in general, the problem is when a compiler tries to take over and it doesn’t get it right.
My earlier comment was about how much better the modern fortran type/kind system is than the old f77 approach. With the type/kind system, the programmer in principle has full control over all of the precisions of all the variables and also all of the intermediates (e.g. using real(x,kind) explicitly where necessary). Of course, we are using a high-level language, so we eventually must assume that the compiler is doing the right things with our code, but things are much better now than with f77 when all we had was whether to use an E or a D exponent. Now we can program, in a portable way, using a wide variety of KIND values. Looking forward to the future, the fortran type/kind system is all set and in-place for half-precision reals, decimal reals, and 256-bit reals. I don’t know of any language better positioned for these possibilities than fortran.
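As a small illustration of that kind of control over intermediates (a sketch using the iso_fortran_env kind names, not code from this thread):
program kind_control
   use, intrinsic :: iso_fortran_env, only: sp => real32, dp => real64
   implicit none
   real(sp) :: a = 1.0000001_sp, b = 3.0000003_sp
   real(dp) :: c
   c = real(a, dp) * real(b, dp)   ! force the product itself to be evaluated in double precision
   print *, c
end program kind_control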
To quote from Lahey F95 Language reference (Fortran 77 Compatibility)
“Fortran 77 permitted a processor to supply more precision derived from a REAL constant than can be contained in a REAL datum when the constant is used to initialize a DOUBLE PRECISION data object in a DATA statement. Fortran 90 does not permit this option.”
The F90 change did not allow the compiler to provide enhanced precision where the compiler identified it would be of benefit to the calculation. It is “unfortunate” that this assistance has been excluded by the Fortran Standard.
My question is how is this different from the other benefits of a smart Fortran compiler, such as recently discussed, the benefit of a higher precision accumulator for the SUM intrinsic.
The justification for this rule also goes against any benefits that could be derived from a smarter Fortran compiler.
The f90 revision prevented a compiler from doing this behind your back. You can still write the code to do this, the same as before, but you need to specify the appropriate conversions and the type/kind for the accumulation. There have been a couple dozen examples of extended precision accumulation presented here recently in other threads, all of them entirely consistent with modern fortran.
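For example, one explicit way to write such an accumulation in modern Fortran (a sketch in the spirit of those examples, not a copy of any of them):
program extended_sum
   use, intrinsic :: iso_fortran_env, only: sp => real32, dp => real64
   implicit none
   real(sp), allocatable :: x(:)
   real(dp) :: acc
   integer :: i
   allocate (x(1000000))
   call random_number(x)
   acc = 0.0_dp
   do i = 1, size(x)
      acc = acc + real(x(i), dp)   ! accumulate in double precision, by explicit request
   end do
   print *, sum(x), real(acc, sp)  ! default-kind SUM versus the explicitly widened sum
end program extended_sum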
OK, so it looks like you were right and that this compiler was promoting the real constants in some specific contexts.
However, the important word is “permitted”. Not “required”. A user develops a code with the Lahey compiler, everything looks correct, then you port it to another platform with another compiler, or even the same platform with another compiler, and… the results significantly differ (and are wrong). Good luck understanding why.
BTW, I’m curious about the part of the F90 standard that forbids this option (which is better than saying nothing about it, IMO).
That’s actually a good question. If you are writing an algorithm using the SUM intrinsic, with a compiler that has a smart implementation of SUM, you could have surprises with another compiler that has a “less smart” implementation. My personal opinion is “In critical parts of an algorithm, avoid using intrinsics that are not guaranteed to give the desired precision”. I can use the DOT_PRODUCT intrinsic for a few hundred elements, but for a dot product on 10**9 elements I write my own routine (with several levels of accumulation).
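For what it’s worth, a rough sketch of what “several levels of accumulation” can look like (an illustration with an assumed block size, not the actual routine):
function dot_blocked(x, y) result(d)
   use, intrinsic :: iso_fortran_env, only: sp => real32, dp => real64
   implicit none
   real(sp), intent(in) :: x(:), y(:)
   real(dp) :: d, partial
   integer, parameter :: block = 4096
   integer :: i, j
   d = 0.0_dp
   do i = 1, size(x), block
      partial = 0.0_dp                          ! first level: per-block partial sum
      do j = i, min(i + block - 1, size(x))
         partial = partial + real(x(j), dp) * real(y(j), dp)
      end do
      d = d + partial                           ! second level: sum of block partials
   end do
end function dot_blocked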
The idea of a programming language having intrinsic functions that provide undocumented, inconsistently incorrect results is not great. Personally I don’t see why anyone would argue that this is acceptable. Language intrinsic functions should probably strive for accuracy above all, even if that means poor performance for simpler, smaller cases. Or, if bad performance is not acceptable, then these types of things should not be provided at all. I would argue it is much better to provide nothing than to provide tools that will give incorrect results in a compiler-dependent way.
These language specs are fairly specific. The f77 statement applies specifically to DOUBLE PRECISION entities in DATA statements. As a practical matter, did compilers extend this silent promotion of literal constants to other contexts, such as expressions or parameter statements? What about f77 extensions to real*10, real*16, or complex*32 entities, were those promoted silently too?
As I said before, I avoided using this feature because I knew it was not portable. The portable way to do this in f77 was to append a D exponent to the constant, or to replace the existing E exponent with a D. It also always just “felt” like it was the wrong way to write programs. When one says 1.23456789012345678E0, it seems unnatural to me to promote “as if” I had appended D0 instead. And if such a promotion had occurred in some contexts, such as an actual subroutine argument, then havoc would have ensued. That means the type and value of that constant would have depended on its context, something that would certainly confuse programmers, new and experienced alike.
On the other hand, fortran i/o has always ignored the E and D exponents, and both the input and output precision is determined from the type/kind of the list entity. So in this case, reading 1.23456789012345678E0 into a DOUBLE PRECISION variable results in the full precision despite the E0 exponent. And now with the KIND structure, it would result in the full precision of the list entity regardless of the KIND value. In this case, it is understandable because the internal WP (or whatever the name is) parameter constants used by the programmer are not available to the i/o library. In contrast, the WP parameter constants are available in all expression, initialization, argument, and data contexts.
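A tiny sketch of that i/o behavior (the string and variable names here are purely illustrative):
program read_precision
   implicit none
   double precision :: x
   character(len=*), parameter :: rec = '1.23456789012345678E0'
   read (rec, *) x   ! the E exponent does not limit precision; the kind of x does
   print *, x
end program read_precision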
The basis of my posts is what should we expect from a smarter Fortran compiler.
My hope is that there is somewhere to grow, ignoring the constraints that Fortran 90, page 3 implies, something I have always considered a negative for users.
Tell that to all the Fortran users who have produced less accurate results by using data statements in F90+, such as I have shown above. I have seen these results in programs converted to F90+; an outcome which is totally unnecessary. This is a distinct change from most earlier FORTRAN compilers that tried to assist the user. It is a totally unnecessary destruction of precision. Not all Fortran users read or expect the fine print.
So much for a smart optimising Fortran compiler to assist users, or do you enjoy the gotcha moment!
That’s the price to pay when relying on features that are not guaranteed by the standard, just because one wants to avoid appending D0 to a constant. From the moment I started programming in Fortran, I have always been taught and told that literal constants without a D exponent were interpreted as default reals.