For the A ≥ HUGE(1) case, can this be interpreted to mean that INT(A) is allowed to return HUGE(1)? That is the largest representable integer, although not the largest mathematical integer.
All this discussion about overflow of generic intrinsics is just "fiddling while Rome burns."
This extension of generic intrinsics in the Fortran standard is not fit for purpose for numerical calculations. You should not have to include KIND in a generic intrinsic. The approach in the standard should be changed so that it works in a simpler way.
As for size(array) returning a negative value with a 64-bit compiler, this is just stupid. The mistake is in the standard, not in the Fortran user's code.
My silence is because I am trying to isolate the issue, which would allow me to post some code and values. However, I keep running into problem after problem. As soon as I can replicate the issue, I will let you know.
I looked up the wording in the F2023 draft, and it matches that previously posted for F2018. Can you point me to the place in the F2023 draft where the invalid INT(A) cases are specified?
The KIND that we are talking about is for the output value, not for the input values. If the KIND argument were removed from the INT() intrinsic (or from many other intrinsic functions), then the only way for a programmer to specify the KIND of the output would be to have different generics for each possible output kind. That alternative does not seem as portable or as open ended as the current situation where KIND is an argument.
Having said that, I would also point out that it is not possible for a programmer to write a function in which the output KIND depends on the value of an input argument. Many Fortran intrinsic procedures work that way, so it is obviously a useful capability, but Fortran does not extend it to users of the language.
I see this sentence in the f2023 draft:
A program shall not invoke an intrinsic procedure under circumstances
where a value to be assigned to a subroutine argument or returned as
a function result is not representable by objects of the specified type and
type parameters.
However, that does not answer my previous question, because HUGE(1) is certainly representable. My question is whether HUGE(1) was the correct value to return for INT(A). It does appear to qualify as the correct return value under the wording:
`INT(A)` is the integer whose magnitude is the largest integer that does not
exceed the magnitude of `A` and whose sign is the same as the sign of `A`.
My question is really what does the term integer mean in that sentence. Is it a representable integer (i.e. within the model for its KIND) or is it an abstract mathematical integer? If it is the former, then HUGE(1) is indeed the largest representable integer that does not exceed the magnitude of A. There is, after all, no larger representable integer.
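Under the "representable integer" reading, INT(A) would effectively saturate at the limits of the result kind. A minimal Python sketch of that interpretation (the constant `INT32_MAX` stands in for HUGE(1) of a default 32-bit integer; the function name is illustrative, not anything from the standard):

```python
import math

INT32_MAX = 2**31 - 1  # plays the role of HUGE(1) for a default 32-bit integer

def int_saturating(a: float) -> int:
    """Truncate toward zero, clamping the magnitude to the model limit.

    Models the reading in which "integer" means a representable integer
    of the result kind.  Note the Fortran integer model is symmetric
    (-HUGE..HUGE), so the negative clamp is -INT32_MAX, not -INT32_MAX - 1.
    """
    t = math.trunc(a)  # truncation toward zero, sign preserved
    return max(-INT32_MAX, min(INT32_MAX, t))

print(int_saturating(2.1e9))   # in range: 2100000000
print(int_saturating(2.2e9))   # clamps to 2147483647
print(int_saturating(-2.2e9))  # clamps to -2147483647
```

The clamped value satisfies the quoted wording verbatim under that reading: it does not exceed the magnitude of A, it has the same sign, and no larger representable integer qualifies.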
What you ask is worth an interpretation request with J3. Per your description, the standard does appear to lean more toward the following in its spirit even if the verbiage might be falling a tad short:
```fortran
elemental function int( x ) result(r)
   ! an intended specific instance of `INT` returning default integer
   ! for a particular real kind of A
   use, intrinsic :: iso_fortran_env, only : DP => real64  ! DP added so the sketch compiles
   real(DP), intent(in) :: x
   integer :: r
   if ( abs(x) > huge(r) ) then
      r = huge(r)
      if ( x < 0.0_dp ) r = -r
   else
      block
         intrinsic :: int   ! refer to the intrinsic here, not this function
         r = int( x )
      end block
   end if
end function
```
This is the first part of the sentence in the F2023 draft describing INT(A) when the argument A is real. The rest of the sentence describes those two cases, |A|<1 and |A|≥1.
Note that it does not separate out three cases, |A|<1, 1≤|A|≤HUGE(1), and |A|>HUGE(1). It seems reasonable to think that if the standard committee wanted that last case to return an error, they could have specified that, or if they wanted the domain of the argument to be limited, they could have specified that limit.
To give some examples, in the description of SQRT(X), the text says, “If X is real, its value shall be greater than or equal to zero.” The text describing ASIN(X) says, “X shall be of type real with a value that satisfies the inequality |X| ≤ 1, or of type complex.” The text describing LOG(X) says, “If X is real, its value shall be greater than zero.” Many other intrinsic functions have limited domains, and those limits are specified explicitly. When “shall” is used in this way, the convention is that it places the responsibility for the restriction on the programmer. In the text describing INT(A), there are no “shall” clauses.
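For comparison, some other languages surface such domain restrictions as explicit run-time errors rather than leaving them as programmer obligations. Python's math module is a loose analogue (this illustrates the contrast only; the Fortran text mandates nothing of the sort):

```python
import math

# Python's math module enforces the same domain limits quoted above for
# SQRT, ASIN, and LOG with an explicit ValueError rather than a "shall"
# restriction on the caller.
for f, bad in [(math.sqrt, -1.0), (math.asin, 2.0), (math.log, 0.0)]:
    try:
        f(bad)
    except ValueError as e:
        print(f"{f.__name__}({bad}) -> ValueError: {e}")
```

In Fortran, by contrast, violating a "shall" restriction makes the program nonconforming, and the processor is free to do anything at all.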
As noted earlier, the IEEE_INT() intrinsic does specify which signals are raised when the result cannot be represented. It is unclear what that implies, if anything, for the INT() intrinsic.
I not only read the parts of the standard that you indicated, but I also quoted the parts that disagree with your conclusion.
The committee does discuss errors in the related IEEE_INT() intrinsic. As I stated before, it is unclear what that implies regarding INT(). It could imply that no errors should be returned, or it could imply that the result is processor dependent (although that is not stated explicitly, as it sometimes is for other intrinsic functions). It is difficult to infer intention from the absence of some particular statement.
You claim that an argument with |A|>HUGE(1)+iota is nonconforming. I have pointed out that that particular interpretation is based on ambiguous language in the standard, specifically whether the word “integer” means representable integer or mathematical integer.
Let me make the same argument a different way. Suppose that a processor returns HUGE(1) for INT(A) when A > HUGE(1). That return value is (1) representable, (2) its numerical value does not exceed the value of the argument, and (3) it is the largest such representable integer. With those three features, it does not violate the general nonconforming restriction in 16.9.1, and it does not violate any further domain restrictions in 16.9.110. There are then two questions.
- Does such a function satisfy the requirements of the standard?
- Is this behavior required/mandated by the standard?
A programmer trying to write portable code that does not abort unexpectedly would want both of those answers to be yes. Anything less means that the programmer must go to extra effort to ensure that the code does not abort unexpectedly and computes the same results with all processors.
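That "extra effort" is essentially a range guard before every conversion. A hedged Python sketch of the pattern (names are illustrative; in Fortran one would compare against huge() of the target kind before calling INT):

```python
import math

INT32_MAX = 2**31 - 1  # plays the role of HUGE(1) for a 32-bit result kind

def safe_int(a: float) -> int:
    """Convert only when the truncated value is representable; otherwise
    report the problem instead of silently wrapping or aborting."""
    if not math.isfinite(a):
        raise ValueError(f"cannot convert non-finite value {a!r}")
    t = math.trunc(a)  # truncation toward zero
    if abs(t) > INT32_MAX:
        raise OverflowError(f"{a!r} exceeds the representable range")
    return t

print(safe_int(2.1e9))  # in range: 2100000000
try:
    safe_int(2.2e9)
except OverflowError as e:
    print("rejected:", e)
```

With a guaranteed saturating (or otherwise defined) INT, none of this boilerplate would be needed for portable behavior.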
Can you step through your reasoning process for this? I went through the process that arrives at the other conclusion.
How would you answer the two questions I posed before?
For me, it appears a bit more natural to interpret the above sentence as
`INT(A)` is the integral value whose magnitude
does not exceed the magnitude of `A` ...
than
`INT(A)` is the value of INTEGER type (with an optionally
specified KIND) whose magnitude does not exceed ...
(where the former is mathematical integer, and the latter is INTEGER type), though just my intuitive impression from the sentence…
FWIW, the documentation of the trunc() function in Julia gives a similar sentence, but seems more clear-cut about the meaning:
```
trunc([T,] x)

trunc(x) returns the nearest integral value of the same type as x
whose absolute value is less than or equal to the absolute value of x.

trunc(T, x) converts the result to type T, throwing an InexactError
if the value is not representable.
```
(test)
```
> trunc( Int32, 2.1e9 )
2100000000
> trunc( Int32, 2.2e9 )
ERROR: InexactError: trunc(Int32, 2.2e9)
```
EDIT: Python/NumPy gives this result:
```
>>> int( 2.1e9 )
2100000000
>>> int( 2.2e9 )
2200000000
>>> np.int32( 2.1e9 )
2100000000
>>> np.int32( 2.2e9 )
-2094967296
```
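The odd-looking np.int32 result is ordinary two's-complement wraparound: 2200000000 does not fit in 32 bits, and reducing it modulo 2**32 reproduces the printed value. (Recent NumPy versions raise OverflowError for this call instead of wrapping, so the wrapped result above is version dependent.)

```python
# 2200000000 exceeds the int32 range, so on NumPy versions that wrap,
# the stored value is the argument reduced modulo 2**32 and
# reinterpreted as a signed 32-bit integer.
wrapped = 2200000000 - 2**32
print(wrapped)  # -2094967296, matching the np.int32(2.2e9) output above
```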
The results of different compilers/options are as follows (according to Compiler Explorer), so int() with too-large arguments may well be processor dependent.
```fortran
program main
   implicit none
   real :: x
   x = 2.2e8
   print *, int( 2.1e9 )
   print *, int( x * 10 )
end
```
gfortran-13.2 -O0:
```
2100000000
-2147483648
```
gfortran-13.2 -O2:
```
2100000000
2147483647
```
ifort-2021.10 -standard-semantics -O0:
```
2100000000
-2147483648
```
ifort-2021.10 -standard-semantics -O2:
```
2100000000
-2147483648
```
ifx-2023.2.1 -standard-semantics -O0:
```
2100000000
-2147483648
```
ifx-2023.2.1 -standard-semantics -O2:
```
2100000000
0
```
With a check option attached (gfortran-13.2 -ffpe-trap=invalid), I get a floating-point exception for gfortran, indicating the line `print *, int( x * 10 )` at run time:
```
Program received signal SIGFPE: Floating-point exception
- erroneous arithmetic operation.
```
The assignment
```fortran
integer n ; n = x * 10
```
also gives a similar SIGFPE error (with the above option). I wonder if a similar check option is available for ifort and ifx as well.