A: 7.1.3 The set of valid values for complex consists of the set of all the combinations of the values of the real and imaginary parts
B: 7.4.3.3 (1) The complex type has values that approximate the mathematical complex numbers. The values of a complex type are ordered pairs of real values. The first real value is called the real part, and the second real value is called the imaginary part.
C: 7.4.3.3 (2) Each approximation method used to represent data entities of type real shall be available for both the real and imaginary parts of a data entity of type complex. The (default integer) kind type parameter KIND for a complex entity specifies for both parts the real approximation method characterized by this kind type parameter value.
D: 19.5.3.2 (2) a nonpointer scalar object that is double precision real or default complex occupies two contiguous numeric storage units
E: 19.6.5 (13+14) When a default complex entity becomes defined, all partially associated default real entities become defined. When both parts of a default complex entity become defined as a result of partially associated default real or default complex entities becoming defined, the default complex entity becomes defined
Given all this, I can't see how a compiler could implement a complex number other than by storing the real and imaginary components with the same kind. In other words, the answer is yes for questions (1) and (2). This excludes a different number of bits between the two components, or a storage of the modulus+angle, or whatever…
Regarding question (3) I'm not convinced. Citation (B) talks about an ordered pair, the first value being the real component, but without any reference to the storage; i.e., nothing really says that r(1) and c%re share the same numeric storage unit:
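For instance, take this minimal sketch (my wording of the question, using EQUIVALENCE since that is the only conforming way to set up the association): citation (E) says r becomes defined when c does, but not which values r(1) and r(2) acquire.

program which_part
  implicit none
  real    :: r(2)
  complex :: c
  equivalence (r, c)   ! c occupies the same two numeric storage units as r
  c = (1.0, 2.0)
  ! in practice this prints 1.0 2.0 everywhere, but is r(1) guaranteed
  ! to hold the real part rather than the imaginary one?
  print *, r
end program which_part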
I'm not sure… For instance, the integer model in the standard talks about ordered (indexed) bits, but without specifying how they are actually stored. And as a matter of fact they are not stored in the same order on big-endian and on little-endian machines.
Well, so maybe that's the reason for declaring equivalence obsolescent.
On the other hand, implementing complex the backwards way would probably break quite a few old codes.
Sure, and no compiler vendor would do that now, all the more so as there would be no specific advantage to storing them the other way.
But at some point I think that this de facto standard should be clearly integrated into the standard one way or another. And that (this is actually my point) a real pointer should be allowed to point to a complex target, and vice versa (and this requires an unambiguous storage order).
Sorry for the tangent, but it just occurred to me that Fortran does not have a type for the Gaussian integers (basically complex numbers but only with integer components)
I'm not sure how many applications Gaussian integers have, but it seems strange for a language to have a complex (float) type and not a complex integer type.
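Nothing prevents rolling one by hand, of course; a minimal sketch of a derived type (all names here are made up):

module gaussian_integers
  implicit none
  type :: gint
    integer :: re = 0, im = 0
  end type gint
  interface operator(*)
    module procedure gint_mul
  end interface
contains
  elemental function gint_mul(a, b) result(p)
    type(gint), intent(in) :: a, b
    type(gint) :: p
    p = gint(a%re*b%re - a%im*b%im, a%re*b%im + a%im*b%re)
  end function gint_mul
end module gaussian_integers

program demo
  use gaussian_integers
  implicit none
  type(gint) :: u, v
  u = gint(1, 2)
  v = gint(3, -1)
  print *, u*v   ! (1+2i)*(3-i) = 5+5i, printed as the two components 5 5
end program demo

But of course that gives none of the intrinsic support (literals, cmplx/real/aimag, formatted I/O) that complex gets for free, which I suppose is the point.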
program test
  implicit none
  complex, target :: c
  real, pointer :: rr, ri
  rr => c%re
  ri => c%im
  c = (1.0, 2.0)
  print *, rr   ! ===> 1.00000000
end program test
But at some point I think that this de facto standard should be clearly integrated into the standard one way or another. And that (this is actually my point) a real pointer should be allowed to point to a complex target, and vice versa (and this requires an unambiguous storage order).
Agreed. We use this in our code:
do j = 1, n
  do i = 1, j-1
    c(i,j) = c(i,j) + zdotc(l, a(1,i), 1, b(1,j), 1)
  end do
  c(j,j) = c(j,j) + ddot(2*l, a(1,j), 1, b(1,j), 1)
end do
to set up a Hermitian matrix. The diagonal elements are real so ddot can be used as a speed-up. However, it relies on consecutive storage of the real and imaginary parts. Also, you have to "hide" this from the compiler by avoiding an explicit interface or using ddot with real arrays in the same routine. Most unsatisfactory.
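For what it is worth, the arithmetic identity that the ddot shortcut exploits can be checked without any BLAS at all (a self-contained sketch in default kinds; it builds the complex vectors from the reals explicitly, so it only demonstrates the arithmetic, not the storage layout the real code relies on):

program ddot_identity
  implicit none
  integer, parameter :: l = 4
  real    :: ra(2*l), rb(2*l)
  complex :: a(l), b(l)
  call random_number(ra)
  call random_number(rb)
  a = cmplx(ra(1:2*l:2), ra(2:2*l:2))   ! odd entries = real parts, even = imaginary
  b = cmplx(rb(1:2*l:2), rb(2:2*l:2))
  print *, dot_product(ra, rb)          ! what ddot(2*l, ...) computes
  print *, real(dot_product(a, b))      ! real part of zdotc(l, ...): the same value (up to rounding)
end program ddot_identity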
One wonders why this is limited to a default complex entity. Shouldn't it apply equally to a complex entity of any KIND?
In F77, all possible combinations of storage sequence association involving integer, real, logical, double precision, and complex were defined for scalars and arrays with equivalence and for common blocks. F77 avoided the storage sequence association of character and noncharacter entities. F90 and later introduced the KIND system for all of these intrinsic data types, but these later standards have tended to avoid the long discussions of storage sequence association, hence my question above about the apparently intentional use of "default".
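The "default" wording does have visible consequences; for example, the double-precision analogue of the real/complex overlay is not even a conforming EQUIVALENCE, since a nondefault-kind intrinsic entity may only be equivalenced with objects of the same type and kind (a sketch, with the nonconforming lines commented out):

program kind_asymmetry
  implicit none
  real    :: r(2)
  complex :: c
  equivalence (r, c)            ! conforming: both default kinds, two numeric storage units each
  ! double precision   :: d(2)
  ! complex(kind(0d0)) :: z
  ! equivalence (d, z)          ! not conforming: z has a nondefault kind
  c = (1.0, 2.0)
  print *, r
end program kind_asymmetry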
Yes, but it does not cover the typical use case where one has a real array on which a real-to-complex FFT is performed in place. Before the FFT and after the inverse FFT one wants to deal with real numbers, and with complex numbers in between. There's a trick with c2f_pointer() but this is not really satisfactory (and nonstandard AFAIK):
program test
  implicit none
  complex, pointer :: c(:)
  real, allocatable, target :: r(:)
  !...
  call rfft(r)
  call c2f_pointer(c_loc(r), c, [size(r)/2])
  !...
end program test
3 | equivalence (r,c)
| 1
Error: EQUIVALENCE attribute conflicts with ALLOCATABLE attribute in 'r' at (1)
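(For context, the message above came from something along these lines, a reconstruction rather than the original snippet; EQUIVALENCE is simply incompatible with the ALLOCATABLE attribute:)

program equiv_alloc
  implicit none
  real, allocatable :: r(:)
  complex :: c(1024)
  equivalence (r, c)   ! rejected: EQUIVALENCE cannot be combined with ALLOCATABLE
end program equiv_alloc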
And anyway, using equivalence for this use case is also a trick that is not formally supported by the standard. IMO the standard should clearly state the storage association between REAL and COMPLEX, and provide specific intrinsics:
real, allocatable :: r(:)
complex, pointer :: c(:)
allocate( r(n) )
c => cmplx_pointer(r)
end
and the other way:
real, pointer :: r(:)
complex, allocatable :: c(:)
allocate( c(n) )
r => real_pointer(c)
end
The standard explicitly says (18.2.3.3 of the F2018 standard) that in call c_f_pointer(c_loc(r),c,[size(r)/2]) r and c shall have the same type, which is not the case here.
True, I can't see what could go wrong either, but relying on a nonstandard feature is not really satisfactory. And relying on iso_c_binding while no C code at all is involved is not really satisfactory either.
The function is called c_f_pointer, before anyone gets confused.
Otherwise, I fully agree with your points. This use case is common enough that it warrants some type of support from the standard, beyond using C binding.
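For reference, the workaround as it is usually written today (a sketch; the compilers I have seen do the expected thing, but as discussed above it arguably exceeds what 18.2.3.3 promises, since r and c do not have the same type):

program real_as_complex
  use, intrinsic :: iso_c_binding, only: c_loc, c_f_pointer
  implicit none
  integer :: i
  real, allocatable, target :: r(:)
  complex, pointer :: c(:)
  allocate(r(8))
  r = [(real(i), i = 1, 8)]
  call c_f_pointer(c_loc(r), c, [size(r)/2])   ! reinterpret r as 4 complex values
  print *, c   ! in practice: (1,2) (3,4) (5,6) (7,8)
end program real_as_complex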
It's nice that the standard allows accessing the strided sub-array (or is this just for scalars?):
rr => c%re
ri => c%im
Maybe the syntax could be:
c(1:size(r)/2) => complex :: r
What would happen if size(r) is an odd number? Should the standard require that the size of r be even if a complex pointer is to be associated with a real target? Would it be okay to leave one "dangling" element?
Ah, you are right. It seems like there should be exceptions for the cases where storage sequence association is already defined by the standard. In this case, the KIND values even match.
Using C binding features without any actual C code bothered me too 20 years ago (f2003). Now I don't even think about it. I'm just glad it is there.
It is allowed for arrays; however, I remember a discussion from a couple of years ago where some compilers do copy-in/copy-out argument association instead of array slice association, so there might be some performance/efficiency issues with using %re and %im for arrays.
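To make the distinction concrete (a sketch; whether the call below involves a temporary copy is up to the compiler): pointer association with c%re gives a strided view into the original storage, while passing c%re or c%im as an actual argument may be done either by descriptor or by copy-in/copy-out.

program re_views
  implicit none
  integer :: i
  complex, target :: c(4)
  real, pointer   :: p(:)
  c = [(cmplx(i, -i), i = 1, 4)]
  p => c%re            ! strided view: p(i) aliases the real part of c(i), no copy
  p = 0.0              ! modifies c in place
  print *, c
  call negate(c%im)    ! a compiler may pass a descriptor or make a temporary copy
  print *, c
contains
  subroutine negate(x)
    real, intent(inout) :: x(:)
    x = -x
  end subroutine negate
end program re_views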
Apart from it being a kind of "thank god, we have the C stuff to get Fortran working", having full Fortran support would be safer. c_f_pointer is convenient but you are on your own (no possible check of the kinds, of the contiguity, etc.).
My answer is no. The memory system is abstracted away in Fortran. There is no requirement to map the concepts of Fortran to a linearly addressed store. We can run Fortran programs in our heads and I don't think we have such a store. Having said that, the sanest way of implementing the rules of storage association is what we usually find in implementations, because the available hardware has a linearly addressed store.
Here's a basic rule of storage association:
When a scalar variable of intrinsic type becomess (sic) defined, all totally associated variables of different type become undefined.
Your "A: 7.1.3" and "B: 7.4.3.3 (1)" are about the set of values, and the ordering mentioned applies to syntactic forms in the language, such as the literal (1.0,2.0).
Your "C: 7.4.3.3 (2)" is about the relation of a REAL KIND to a COMPLEX KIND.
Your "D: 19.5.3.2 (2)" specifies the accounting of storage sequences (number and type) for COMPLEX. Nothing more.
Your "E: 19.6.5 (13+14)" specifies which real (or complex) entities become defined when associated complex (or real) entities become defined, but does not specify what value these entities acquire.
nothing really says that r(1) and c%re share the same numeric storage unit:
Indeed, the statement
EQUIVALENCE(r(1),c%re)
is not allowed (complex-part-designator is not one of the options listed for equivalence-object).
I am inclined to believe that storing complex values as 64 interleaved bits from an IEEE binary32 encoding of the modulus and a 32-bit integer encoding cos(θ) may be insane but doable.
This is what I was thinking too, until I realized in this thread the implications of
rr => c(:)%re
ri => c(:)%im
This seems to be valid Fortran, and I can't see how it could work if a complex were not stored as plain, non-interleaved real and imaginary parts.
Besides, with a modulus+angle storage I can't see how (A), (B), and (C) could be honored. For instance, if both cmplx(huge(0.0),0.0) and cmplx(0.0, huge(0.0)) are valid complex numbers, (A) says that cmplx(huge(0.0),huge(0.0)) must be a valid complex number too. This is not possible with a modulus+angle storage.
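A quick check of that argument (a sketch; assumes IEEE default reals):

program modulus_overflow
  implicit none
  complex :: z
  z = cmplx(huge(0.0), huge(0.0))   ! a valid complex value according to (A)
  print *, abs(z)                   ! its modulus, huge(0.0)*sqrt(2.0), overflows: prints Infinity
end program modulus_overflow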
It looks like you can either have the TARGET attribute or be able to participate in EQUIVALENCE, but not both. I think you could follow the "insane" scheme above and satisfy the requirements of the Standard, but you would have to carry more room in the runtime baggage of POINTER variables, to remember that you are supposed to convert to Cartesian coordinates and supply the real part whenever the target is needed. In principle there is no limit to the amount of semantic information you can carry along at runtime. Efficiency dictates where you stop.