Can floating point literals be adapted by the compiler to a double precision variable?

This was a case where an ambiguity was removed. It isn’t a case where a specified behavior was changed to a different specified behavior.

I don’t disagree with the sometimes slow progress of fortran. However, I don’t think this case fits that description. The fortran programmer currently has all the tools necessary to specify the kinds of the operands, and also the kinds of the intermediate results, if something other than the default promotion rules is required. In fact, I would say that the f77 optional silent promotions work against the goal of expressing a computation clearly and safely.
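
A minimal sketch of those tools (the names sp, dp, a, b, s are just illustrative):

   integer, parameter :: sp = kind(1.0), dp = kind(1.0d0)
   real(sp) :: a, b
   real(dp) :: s
   ..
   s = real(a, dp) * real(b, dp)  !<-- the intermediate product is formed in dp by construction, not by any promotion rule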

If half-precision reals were supported by the compiler, then they would have a different KIND value, a different storage_size(xr16), a different epsilon(xr16), and so on. In your example code, the compiler does not support half precision, so the selected_real_kind() function returns the smallest precision KIND value that satisfies the request (which happens to be the default KIND).
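
A small illustration of that fallback (assuming, as in your example, a processor with no half-precision kind):

   integer, parameter :: xr16 = selected_real_kind(p=3)  !<-- a half-precision-like request
   real(xr16) :: x
   ..
   print *, kind(x), storage_size(x), epsilon(x)  !<-- reports the default real kind, 32 bits, epsilon ~1.19e-7 here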

I think the solution is to have different modes in a compiler. One strictly standards conforming, and that should make one camp happy. The other mode is “no unintuitive surprises”; one can call it “pedantic” or “strict” mode. Here are some examples of checks that it should do: Create pedantic/strict mode · Issue #363 · lfortran/lfortran · GitHub. Then as a user you select the mode that you want.


That is indeed a solution, and perhaps the most realistic one.

The follow-up question then becomes: why isn’t “strict/pedantic” the default for the compiler, in order to prevent hidden/unexpected errors? After that, “why doesn’t the standard just support the collection of rules from strict/pedantic mode?” At that point, we get back to where we are now, with “backwards compatibility” as the valiant last-stand argument preventing any modernization of the language as a whole.

I am actually curious: of the supposedly numerous legacy codes still in use today, are they actually standard-conforming? The only ones I am familiar with are decidedly NOT. All stemming from archaic projects started in the 1980s or before, most of the Fortran codes I actually see at work are in fact reliant on non-standard behavior from older versions of Intel’s ifort. This includes things like non-standard default ifort behavior (such as needing -assume realloc_lhs to get the standard reallocation-on-assignment semantics).
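
For instance, here is a sketch of what that particular flag governs: this is standard Fortran 2003 (re)allocation on assignment, but older ifort versions only honor it with -assume realloc_lhs:

   integer, allocatable :: v(:)
   ..
   v = [1, 2, 3]  !<-- standard semantics: v is allocated to size 3
   v = [v, 4]     !<-- and reallocated to size 4 on assignment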


@certik, I don’t think it’s a question about “camps”.

The way Fortran has evolved, there will always be multiple processors that are crucial for the practitioners.

Under the circumstances, the standard shall remain highly relevant.

The key is for the standard to serve the practitioners. It should not be beholden to certain vendor reps who are entirely divorced from the practitioners, who have their own views and constraints (usually related to said vendor budgets) and some post-retirement gigs, and who dictate what can and what should not be in the standard while overriding or effectively ignoring the feedback practitioners turn in via various communication channels: the WG5 survey, the GitHub J3 proposal site, emails, this discourse, other forums, etc.

Let us see how the work toward Fortran 202Y proceeds, which items do not make it, and in what form the worklist items that do make it end up. As of now, re: the topic of this thread, the only standard-consistent as well as backwardly compatible option is the proposal I linked upthread, which would allow practitioners to define the KIND of literals and intrinsics via a new DEFAULT statement viz.,

! in a program unit
   integer, parameter :: WP = selected_real_kind( p=NN )
   default real (kind=WP)  !<--  Fortran 202Y proposal: a new statement option
   ..
   real(WP) :: x
   ..
   x = 1.23456789012345  !<-- the literal constant is of KIND defined per the DEFAULT stmt above
   ..

The proof will be in the pudding: what is in the official Fortran 202Y document whenever it comes out.

That argument could equally have been made against removing all the other language elements that have been deleted from the language on the grounds of their risky use.

Promoting the precision of constants where it can clearly help should not be banned.

And what is a DOUBLE PRECISION constant for portability? (This is where this problem all started, when going from 60-bit reals to 64-bit reals.)

Yes, GFortran does well regarding standards conformance. I would argue the -pedantic option doesn’t go far enough, though; I would put more checks in there.

I do not believe that for a single second. I mean, among the reasons why Fortran usage dropped over the years, these are at the very bottom of the list. Someone who came to me saying “oh, I don’t want to use Fortran because I have to put “implicit none” at the beginning of every module” would look like a joke to me as a developer.

Typing “implicit none” in Fortran would be a show-stopper, but typing endless “#include <blabla.h>” lines in C/C++ to do things as basic as I/O or math operations is perfectly fine? Come on…

And the proposal linked by @FortranFan to be able to locally define the default real kind in a program unit looks attractive: it could simplify coding (much less real(wp) or _wp everywhere) without breaking backward compatibility at all.
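
For contrast, a sketch of what portable code requires today (wp is just an illustrative name): every real literal needs its kind spelled out, because a bare literal is default real:

   integer, parameter :: wp = selected_real_kind(p=15)
   real(wp) :: x
   ..
   x = 1.23456789012345_wp  !<-- without the _wp suffix, digits beyond default real precision are silently lost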


This seems like an odd choice of terminology. To me, terms like “pedantic” and “strict” mean exactly the same thing as “strictly standards conforming”. What is being discussed here is more or less the opposite: a mode where the compiler, in opposition to strict standards conformance, tries to do what a beginner programmer might want when some incorrect syntax has been used.

Regarding the optional silent promotion in data statements, I previously asked:

As I have stated previously, I avoided this silent promotion feature in f77, so I do not know how extensively it was implemented. Was it also extended to other situations and to other data types? Or was it limited to that specific case (double precision variable && real literal constant && in a data statement)? Now that fortran supports an arbitrary number of real KINDs (rather than just two real types as in f77), how would such a feature even be generalized for modern fortran in a consistent way? If it cannot be generalized in a consistent way (to other KINDs, perhaps including decimal floating point), then shouldn’t that alone be enough to disqualify it from any further consideration?

One further comment about f77. I stated above that f77 supported only two floating point types, real and double precision. Actually, there was a formal f77 subset that supported only one real type. And yes, I used at least one compiler (maybe even more than one) that conformed to that particular subset feature. If I remember correctly, it would map both real and double precision (and also real*4 and real*8) to its only supported real type, which was a 64-bit floating point. Later versions of that compiler (Floating Point Systems Fortran) did eventually support the full standard through software emulation (a 128-bit floating point), but it was so slow that one avoided it unless absolutely necessary. This was similar to the way CRAY supported double precision: it existed, but it was seldom used in practice because it was so slow.

Another thing I remember about the FPS compiler is that, like many other fortran vendors, there were two fortran manuals, one that described the language and a second one that described the compiler options and features. For that first language manual, FPS simply distributed the ANSI F77 manual. They didn’t relabel it, or put company logos on it, or anything; it was the straight ANSI F77 manual. The second manual also included language extensions. One extension I remember using was asynchronous i/o. It took standard fortran another 20+ years to add that feature to the language.


This promotion was mainly associated with PC hardware and the 8087 registers, where all real calculations were performed in an 80-bit register. I do know that Lahey compilers promoted constants to 80-bit reals, and for 4-byte or 8-byte real calculations you had the option of retaining the calculation value in the 80-bit register, or you could explicitly define temporary reals as real*10 to guarantee precision was maintained. (With F77 it was difficult to explicitly define 80-bit real constants, so the compiler helped with this.)
There was also a compile option to not retain the accumulated computation in the register, but to force it to the precision specified in the Fortran code. However, if you wanted to target precision without a performance disadvantage, you could either hope the machine instructions maintained the value in the register, or define real*10 values for the calculation. I always opted for targeting precision.

This became a problem when SIMD (MMX?) instructions became available and there was then a difficult choice between precision and performance, only exacerbated when SSE and then AVX made performance a more dominant issue.

I have produced a simple example (code below) that clearly shows the change in error when calculating the total time for 1,000,000 time steps of 0.001 seconds.
case 1: uses a real constant of 0.001 and gets an accumulated error of 4.75e-5
case 2: uses a real constant of 0.001d0 and gets an accumulated error of 1.67e-8
case 3: uses an 80-bit accumulator and gets an accumulated error of 9.09e-13
case 4: uses an 80-bit accumulator and an 80-bit constant (as f77/8087) and gets an accumulated error of 7.96e-13

Although there are other precision problems with many very small time steps (where first and second differences quickly lose precision), these errors from unexpected constants and real*8 accumulators were a real issue at the time.
This also appeared when converting from F77 to F90 compilers, where the accuracy of benchmark analyses deteriorated, again calling into question the reliability of the new “buggy” F90 compilers that had reduced computational accuracy.

It was a loss of confidence in the new Fortran.
I don’t know why high precision accumulators have since been ignored as a hardware option, but unfortunately, Fortran users are not the main decision makers!
(Making many-core PCs with inadequate memory bandwidth is another recent example.)

      implicit none
      integer, parameter :: dp = kind(1.0d0)
      integer :: i, num_step = 1000000
!     time_step gets a default-real literal; time_step8 gets a true dp literal
      real (dp) :: x, time_step = 0.001, end_time, time_step8 = .001_dp
      real*10   :: x10, time_step10   ! 80-bit extended precision: a common extension, not standard

!     case 1: dp accumulator, default-real constant
      x = 0
      do i = 1,num_step
        x = x + time_step
      end do
      end_time = x
      write (*,*) num_step, end_time-1000, time_step, '  time_step = 0.001'

!     case 2: dp accumulator, dp constant
      x = 0
      do i = 1,num_step
        x = x + time_step8
      end do
      end_time = x
      write (*,*) num_step, end_time-1000, time_step8, '  time_step = 0.001_dp'

!     case 3: 80-bit accumulator, dp constant
      x10 = 0
      do i = 1,num_step
        x10 = x10 + time_step8
      end do
      end_time = x10
      write (*,*) num_step, end_time-1000, time_step8, '  real*10 time'

!     case 4: 80-bit accumulator, 80-bit constant (as f77/8087)
      x10 = 0
      time_step10 = 1.
      time_step10 = time_step10 / 1000
      do i = 1,num_step
        x10 = x10 + time_step10
      end do
      end_time = x10
      write (*,*) num_step, end_time-1000, time_step10, '  time_step = 0.001_10'
      end

!      1000000       4.749745130539E-05         1.000000047497E-03    time_step = 0.001
!      1000000      -1.673492988630E-08         1.000000000000E-03    time_step = 0.001_dp
!      1000000       9.094947017729E-13         1.000000000000E-03    real*10 time
!      1000000       7.958078640513E-13    1.00000000000000000E-03    time_step = 0.001_10

Again, you can’t rely on hidden promotions or hardware features that are not guaranteed by the standard. Wherever you need higher precision, declare higher-precision variables.

This statement applies to modern fortran, with all of its KIND values. It did not apply to f77 which supported either a single precision (in the f77 subset) or two precisions (REAL and DOUBLE PRECISION). Specifically, it was impossible in standard f77 to declare something like a REAL*10 variable for an extended precision accumulation.

I do not agree with the statement:

For me, the silent promotion was just a nuisance when trying to write portable code, so I was glad when the standard removed that ambiguity. It was one less “gotcha” that I had to worry about. The 80-bit register usage was a separate issue, with its own nuisance factor. Plus, with f90, there was now the possibility to declare real variables in a standard way with the various precisions that a programmer might like to use.

One thing that was missing in f90, and is still missing to this day, is a generalization of the intrinsic dprod() function to work with arbitrary KINDs. There are workarounds for that missing functionality, but they don’t really replace exactly what dprod() does.
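
For what it’s worth, a hedged sketch of one such workaround for a single kind pair (the function name is hypothetical, and this mimics rather than replaces dprod()):

   elemental function dprod_dp(x, y) result(d)
      integer, parameter :: sp = kind(1.0), dp = kind(1.0d0)
      real(sp), intent(in) :: x, y
      real(dp) :: d
      d = real(x, dp) * real(y, dp)  ! the conversions are exact; the product is then formed in dp
   end function dprod_dp

Generalizing this over arbitrary KIND pairs requires a generic interface with one specific per pair, which is exactly the boilerplate a generalized intrinsic would avoid.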

Sorry Ron, I cannot accept that a higher precision accumulator is a nuisance!

When calculating eigenvectors with an iterative solver, a higher precision accumulator used to filter out already-found eigenvectors significantly improves the iteration times.

Precision is important when you use Fortran to calculate results, not just a nuisance.
I think I should now stop repeating the point!

You have just truncated the citation, which was “the silent promotion was just a nuisance when trying to write portable code” (and I totally agree with that).


Yes, I have posted code here that does use higher precision accumulation, so it is clear that I am not opposed to that technique. It is the silent part that is the problem, and by that I mean that the f77 compiler was allowed to do it, or not, depending on such external things as compiler options. I have been bitten many times by this silent promotion.

@JohnCampbell mentions eigenvalues, which can be (and often are) computed as roots of a polynomial. When iterating on the roots of a polynomial, one might test convergence by comparing two values against a tolerance. With silent promotion, the programmer cannot know if he is comparing 64-bit floating point values or 80-bit values. Should the tolerance be a 64-bit epsilon value or the 80-bit epsilon value? If the compiler makes one of the several wrong silent choices, such as comparing the 64-bit difference to the 80-bit epsilon, then the convergence test is never satisfied. Yes, I’ve actually had this happen to me! And then the frustrating thing is when you compile the code with debug options, the error disappears and the convergence test works correctly!
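
A minimal sketch of that mismatch (assuming the processor offers an 18-digit extended kind, e.g. x87 80-bit):

   integer, parameter :: dp = kind(1.0d0)
   integer, parameter :: ep = selected_real_kind(p=18)  !<-- 80-bit extended, where supported
   ..
   ! epsilon(1.0_dp) is about 2.2e-16, while epsilon(1.0_ep) is about 1.1e-19;
   ! a difference computed and rounded in 64-bit arithmetic generally cannot fall
   ! below the dp epsilon scale, so testing it against the ep epsilon may never pass.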

As far as the language standard is concerned, the important thing is that the language allows the programmer to write the code that he wants. There are two aspects of this, that have been too often conflated in these discussions.

One is the silent promotion of REAL constants to DOUBLE PRECISION in data statements in f77. That is problematic for several reasons: 1) the promotion was optional, a compiler could do it or not, giving the programmer no control (within the standard) over which behavior was implemented; 2) some compilers might do this, while others might not, leading to portability issues; 3) the silent promotion might be done in data statements, but not in other contexts such as within expressions or as actual arguments, leading to inconsistent results of what would appear to the programmer as equivalent situations. This ambiguous behavior was eliminated in f90, which eliminates all of these problems, while still allowing a standard way to initialize constants to the desired precision.
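
Concretely, the ambiguity looked like this: a conforming f77 compiler was permitted either to round the literal to REAL first and then widen it, or to convert the decimal string directly to DOUBLE PRECISION:

      double precision pi
      data pi / 3.14159265358979 /  ! one compiler stores ~7 correct digits, another ~16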

The other issue is the silent use of 80-bit extended precision intermediates. This is a separate issue from the data statement problem, and it can even occur with f90 in which the data statement problem was eliminated. Here, the programmer can be surprised simply because the compiler generates code that is inconsistent with what the programmer wrote. Again, there is no recourse by the programmer (within the standard) to force the compiler to do the right thing. The programmer must rely on compiler options or inline compiler directives to fix the error, and neither of those are acceptable solutions. If the programmer wants to use extended precision, then f90 allows him to do so in a straightforward and standard-conforming way. F90 also provides several other ways to address these kinds of convergence and tolerance issues, such as the nearest() and the spacing() functions. So it is now possible to write portable code that uses extended precision accumulators and does not error because of any silent promotion issues.
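
As one sketch of that approach (the helper name and the factor of 4 are just illustrative):

   logical function converged(x_new, x_old)
      integer, parameter :: dp = kind(1.0d0)
      real(dp), intent(in) :: x_new, x_old
      ! spacing(x) is the gap between representable values at x's magnitude, so
      ! the tolerance scales with the operands instead of relying on a fixed epsilon
      converged = abs(x_new - x_old) <= 4.0_dp * spacing(max(abs(x_new), abs(x_old)))
   end function converged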

This will be of relevance to this discussion:

https://j3-fortran.org/doc/year/23/23-199r1.txt
