I hope my criticism of the gfortran documentation is taken in a constructive way. Other compilers document their KIND values in a similar way. Here is the first line of the table from the Intel documentation of the SIN function:
SIN REAL(4) REAL(4)
When a new fortran programmer reads that documentation, it is easy to see how they might conclude that it is good programming practice to use those literal values when declaring variables or when specifying constants.
However, now that you mention ISO_FORTRAN_ENV, would it improve the documentation to use those named integer parameters rather than the literal integer values everywhere? For example, if the above row of the table were changed to
SIN REAL(REAL32) REAL(REAL32)
would that improve the situation, or would it just make everything more verbose?
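To make the comparison concrete, here is a minimal sketch of what declarations look like with the named constants (the program name and the specific values are just for illustration):

```fortran
program kind_demo
   use, intrinsic :: iso_fortran_env, only: real32, real64
   implicit none
   ! Named kind constants instead of literal 4 and 8
   real(real32) :: x
   real(real64) :: y
   x = sin(1.0_real32)   ! REAL(REAL32) argument, REAL(REAL32) result
   y = sin(1.0_real64)
   print *, kind(x) == real32, kind(y) == real64   ! prints T T
end program kind_demo
```

The declarations are more verbose than real(4), but they remain correct even on a compiler whose kind numbering differs.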
I have noticed that the NAG compiler documents those KIND values not with their literal values, but indirectly with SELECTED_REAL_KIND(). That is consistent with their compiler options that result in different KIND values being returned for those functions. That convention has the nice side benefit that it does not encourage a programmer to use a literal integer value for KINDs in their code.
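The same indirection can be written directly in user code; here is a sketch (the parameter names sp and dp are my own, not NAG's):

```fortran
program srk_demo
   implicit none
   ! Request kinds by precision and range rather than by literal value.
   ! On an IEEE machine these typically select single and double precision,
   ! but the numeric values of sp and dp themselves are compiler-dependent.
   integer, parameter :: sp = selected_real_kind(p=6,  r=37)
   integer, parameter :: dp = selected_real_kind(p=12, r=307)
   real(sp) :: x = 1.0_sp
   real(dp) :: y = 1.0_dp
   print *, precision(x), precision(y)   ! at least 6 and at least 12
end program srk_demo
```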
It was not possible to use both real kinds in a single fortran subroutine on the VAX; the fp format was selected with a compiler option. Presumably, if the VAX had ever had an f90 compiler, the KIND mechanism would have opened up access to all of the hardware formats. Ironically, DEC fought against f8x, and eventually f90, tooth and nail, despite the fact that the language would have been an ideal fit for their hardware.
Regarding IEEE arithmetic, the standard was published in 1985, but that is not when everyone switched. There were new machines being built at that time that still did not use IEEE fp format. New VAX systems were designed and sold up until about 1990, all using the same fp formats (32-bit, two different 64-bit, and 128-bit). To give another example, the Cray 2 was first built in 1985 using its own fp format, and it was the fastest computer in the world at that time. The Cray Y-MP followed in 1988 and the Cray C90 followed in 1991, all with the Cray fp format. I think they sold C90s up until 1996. Of course, both DEC and Cray were also selling other machines that did use IEEE arithmetic in the 1990s, but I’m just pointing out that other fp formats were still common well after f90 and even f95 were introduced.
I am the original poster. I hope some people are still interested in this. I am doing a calculation of the behavior of an Ising spin array. When I originally posted, I was storing each individual spin in a logical. It was suggested that I rewrite the program to store the spins as bits in 32-bit integers. Well, I finally did it. It took about a week of painful work. The result is that storing the spins as bits speeds up the calculation by 7%. Not really worth the trouble.
PS. I used the intrinsics MVBITS, IBITS, POPCNT, IEOR, IBSET, and NOT. Maybe there are fancier ways of working with bits. I don't know.
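For anyone curious, here is a sketch of the kind of bit manipulation involved; set_spin is a hypothetical helper of my own, not a piece of the actual program:

```fortran
program spin_bits
   implicit none
   integer, parameter :: nw = 4       ! 4 default integers hold 128 spins
   integer :: spins(nw), i, up
   spins = 0                          ! all spins down (every bit 0)
   call set_spin(spins, 5)            ! spin 5 up
   call set_spin(spins, 70)           ! spin 70 up (lands in word 3)
   up = 0
   do i = 1, nw
      up = up + popcnt(spins(i))      ! POPCNT counts the up spins per word
   end do
   print *, 'up spins =', up          ! prints 2
   spins(1) = not(spins(1))           ! NOT flips all 32 spins in word 1 at once
   print *, popcnt(spins(1))          ! prints 31
contains
   subroutine set_spin(s, n)          ! set spin n (0-based) to up
      integer, intent(inout) :: s(:)
      integer, intent(in)    :: n
      integer :: w, b
      w = n / bit_size(s(1)) + 1      ! which word holds spin n
      b = mod(n, bit_size(s(1)))      ! which bit inside that word
      s(w) = ibset(s(w), b)           ! IBSET sets one bit, leaves the rest
   end subroutine set_spin
end program spin_bits
```

IEOR with a mask would flip selected spins in the same one-word-at-a-time style.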
Hello Ron, I’m going away until the new year, but I’ll send you some stuff after that. However, here is the gist, as I understand it. Suppose I store 32 bits in an integer. Now suppose that during the computation I need to access two bits that happen to be located in the same integer. That is faster than looking at two logicals, because the computer can keep that one integer in a CPU register, whereas with logicals it would need to load two different words from memory. Unfortunately, most of the time, when my program needs to look at two bits, they are not conveniently sitting in the same integer. So the program needs to go out and fetch two different integers, which is no faster than fetching two different logicals. I’m not a computer person, so I hope that makes sense.
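That locality argument can be made concrete with a small sketch; the particular bit positions here are arbitrary illustrations:

```fortran
program locality_demo
   implicit none
   integer :: words(2)
   words = 0
   words(1) = ibset(words(1), 3)   ! spin 3 up
   ! Spins 3 and 4 live in the same word: one memory load, then two
   ! IBITS extractions from the value already sitting in a register.
   print *, ibits(words(1), 3, 1), ibits(words(1), 4, 1)    ! prints 1 0
   ! Spins 31 and 32 straddle a word boundary: two separate loads,
   ! so no better than reading two separate logicals.
   print *, ibits(words(1), 31, 1), ibits(words(2), 0, 1)   ! prints 0 0
end program locality_demo
```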
I faced a similar problem months ago and spent a day packing the data into all the bits of 32/64-bit integers instead of using only part of each word. The end result was effectively slightly slower.