I don’t think there is a solution that satisfies authors, publishers and readers. The Springer method is a book with the purchaser’s name on every page. Periodically Jane and I do a web search and bring illegal copies to Springer’s attention. They get their legal team involved and eventually the illegal copies get taken down.
Got my copy today… Bol.com was quick!
I am especially interested in everything about coarrays. Vector syntax? Know it since the CDC Cyber 200/205 machines in 1986*. OpenMP needs the same thought process. Modules? Love them. Defined types? Done that. Linked lists? That too…
*) Though the length was specified with a semicolon, as in A(1;N) (adding two vectors: A(1;N) = B(1;N) + C(1;N)), with a maximum length of 2**16-1 = 65535. There were WHERE statements. I remember that I used the BIT data type regularly (bit vectors, so a word was a series of bits, which could be manipulated as in BitArray = A(1;100) .GT. 0). When vast numbers of comparisons were required, bit vectors were extremely efficient. I still wonder why they are not part of modern Fortran.
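For comparison, the closest analogue in current Fortran that I know of is a LOGICAL mask, which costs a whole logical per element rather than one bit per word. A minimal sketch (the array names are made up):

   program mask_demo
      implicit none
      real    :: a(100)
      logical :: bitarray(100)   ! one LOGICAL per element, far less compact than a bit vector
      integer :: i

      a = [(real(i) - 50.0, i = 1, 100)]

      ! rough modern counterpart of the 205's  BitArray = A(1;100) .GT. 0
      bitarray = a > 0.0

      ! mass comparisons then reduce to intrinsics on the mask
      print *, 'positives:', count(bitarray)
      print *, 'any/all: ', any(bitarray), all(bitarray)
   end program mask_demo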
Yes, so does everyone else.
I think the problem isn’t that they aren’t useful, but rather that there are multiple models to choose from (some integer-like, some logical-like, some character-like), and the language committee has never been able to agree on which of them to incorporate into the language.
The Cyber 205 was announced in 1980, and shipped the following year, so this was in the early to middle of the fortran 8x process. That is why the syntax did not match exactly with what we have now. I remember the 2**16-1 limit on vectors too, and I always wondered why they did that. There should at least have been a compiler directive to tell the compiler to code for possibly longer vectors in the next statement. Instead, the programmer had to code the nested loops explicitly – a long dot product was a loop around the vector statement and so on.
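To make the workaround concrete, here is the kind of strip-mined dot product I mean, with modern array syntax standing in for the 205 vector statements and the 2**16-1 limit written out as a parameter:

   program strip_mine
      implicit none
      integer, parameter :: maxvec = 65535     ! 2**16 - 1 hardware vector-length limit
      integer, parameter :: n = 200000
      real :: a(n), b(n), s
      integer :: i, m

      call random_number(a)
      call random_number(b)

      ! the outer loop the programmer had to write by hand; each inner
      ! statement fits within the maximum vector length
      s = 0.0
      do i = 1, n, maxvec
         m = min(maxvec, n - i + 1)
         s = s + dot_product(a(i:i+m-1), b(i:i+m-1))
      end do
      print *, s
   end program strip_mine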
The same can be said for a lot of features common in other languages that for some reason haven’t found their way into Fortran. Re the 205: if I remember correctly, you never actually used vector lengths of 65535. Using something like 65534 could be a lot faster. I don’t remember why, though. Something similar on Crays, where you never used the maximum vector length. The thing I remember most about the 205s is that you had to write your code using their explicit vector syntax to get any usable performance increase, because the automatic vectorization in their compiler was nowhere near as good as Cray’s. Also, if I remember correctly, it was a paged memory system, which meant you had to be careful about how you structured your arrays and how you accessed them to avoid excessive paging.
Don’t forget all the magical Q8 calls.
One of my colleagues for many years had previously been at CDC. After the 7600 project ended, he worked on Cyber 205. He told me, tongue in cheek, that some programs had so little in common with their original source that the only statements left were the “program” statement at the start and the “end” statement at the end.
Somewhere here in my “archives” (i.e., all the ancient computer docs my wife would love if I threw in a dumpster), there is a copy of a CDC white paper on Cyber 205 programming. I think I found out about it from a reference in Mike Metcalf’s book on Fortran optimization. A contact at CDC forwarded me a copy. I really should scan it, and maybe Al Kossow would put it on bitsavers. It is fascinating reading.
A generalization of this argument was one of the reasons that some of the established vendors opposed fortran 8x and eventually f90. They thought that if the language supported array syntax directly, then their competitors would benefit without spending the years of effort necessary to write optimizing compilers for ordinary loop structures. This decade of infighting among the vendors through the 1980s is the reason fortran dropped from the dominant scientific and engineering programming language to its present 1.3%.
I’m not sure this was a motivation. And if it was, experience shows that they were completely wrong, because in practice it took years for most of the compilers to reach the same level of performance with the array syntax as with classical loops. Optimizing array syntax statements is not as easy as it seems.
While I agree this played a part, I think a bigger reason was that Fortran was, to many people, just not “cool” anymore. It didn’t have all the new bells and whistles that other languages had (OO programming, although experience has shown that OO is about 20% useful ideas and 80% hype). Its biggest problem, to me, is that there was no attempt (probably because there was no economic incentive) to build an ecosystem around Fortran: intrinsic support for modern ADTs (lists, dictionaries, hash tables etc.), support for static polymorphism (i.e. generics), and a wealth of readily available libraries beyond LAPACK, ARPACK etc. Also, the lack of tooling (IDEs, profilers etc.) that other languages enjoyed. I have no idea who is to blame for the state Fortran is in now, but it does seem to me that the vendors still have the loudest voice when it comes to what actually goes into the standard. Just my 2 cents.
Brian Meek’s paper is worth a read. A copy can be found on our web site: Fortranplus | Fortran information
The Fortran (not the foresight) saga!
brian_meeks_fortran_saga.pdf
An article written by Brian Meek in 1990 on the development of Fortran
Did you see the comment that I was replying to?
So already, by 1982 or so, we had examples of compilers that could compile array syntax more efficiently than they could compile and optimize standard do loops.
The feature that was different with the eventual f90 was the semantics of these array statements. If I remember correctly, the cyber 205 compilers did not require expressions to be evaluated “as if” the entire right hand side (rhs) were evaluated first, and then assigned to the left hand side (lhs). That might seem like a trivial detail, but it means that the compiler never needs to create temporary arrays to store intermediate results within vector expressions. The cyber 205 vector expressions themselves were very close to the instruction set (which was based on pipelined memory-to-memory operations, not vector registers like the cray). It was the programmer’s responsibility to avoid any memory access conflicts between the rhs and the lhs of the array expressions.
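A small example of where that detail matters. Under the f90 “as if” rule the array statement below must behave as if the rhs were buffered before any element of a is stored; an unbuffered element-by-element pipeline would compute something quite different:

   program overlap_demo
      implicit none
      integer, parameter :: n = 5
      real :: a(n), b(n), c(n)
      integer :: i

      a = [1.0, 2.0, 3.0, 4.0, 5.0]
      b = 0.0
      c = a

      ! f90 semantics: the rhs is conceptually evaluated in full first,
      ! so this is a clean shift-and-add (may require a temporary array)
      a(2:n) = a(1:n-1) + b(2:n)

      ! what a naive memory-to-memory pipeline would do instead:
      ! each step reads the element the previous step just wrote
      do i = 2, n
         c(i) = c(i-1) + b(i)
      end do

      print *, a   ! 1. 1. 2. 3. 4.
      print *, c   ! 1. 1. 1. 1. 1.
   end program overlap_demo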
This is more or less the difference in modern fortran between forall and do concurrent. The former is really just an array expression with the f90 “as if” array-expression semantics, while the latter is a looping construct that can be evaluated in arbitrary order. The programmer is responsible for enforcing the memory access constraints in the latter, which do not apply to the former.
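A minimal illustration of the two constructs, assuming my reading of the semantics above is right (and noting that forall is obsolescent as of f2018):

   program forall_vs_doconc
      implicit none
      integer, parameter :: n = 5
      real :: a(n), b(n)
      integer :: i

      a = [1.0, 2.0, 3.0, 4.0, 5.0]
      b = 0.0

      ! forall keeps the "as if" semantics: every a(i-1) on the rhs is
      ! taken from the old a, so the overlapping reference is well defined
      forall (i = 2:n) a(i) = a(i-1) + b(i)

      ! do concurrent iterations may run in any order; referencing a value
      ! defined in another iteration would break its rules, so only
      ! independent bodies like this one are legal
      do concurrent (i = 1:n)
         b(i) = 2.0 * a(i)
      end do

      print *, a
      print *, b
   end program forall_vs_doconc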
Maybe my memory is wrong about this. I only programmed a little with the cyber 205 and later with the ETA fortran compilers, so hopefully others can correct any misstatements about this detail.
There was this tool, if I recall correctly, called VAST. It translated the source into a “vectorised” source, so one was able to inspect what the vectorised statements looked like and tweak them if needed before passing the code to the compiler. I never liked the “this loop has been vectorised” message. One method of avoiding memory access conflicts was, instead of working column by column, switching to a red-black version: first all red columns, then the black ones (see the sketch below).
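For later readers, a rough sketch of the red-black idea on a generic five-point update (the names and stencil are just for illustration):

   ! within each colour no point depends on another point of the same
   ! colour, so each half sweep is free of read/write conflicts
   subroutine redblack_sweep(u, f, n)
      implicit none
      integer, intent(in)    :: n
      real,    intent(inout) :: u(n, n)
      real,    intent(in)    :: f(n, n)
      integer :: i, j, colour

      do colour = 0, 1
         do j = 2, n - 1
            ! pick the points whose parity of i+j matches this colour
            do i = 2 + mod(j + colour, 2), n - 1, 2
               u(i, j) = 0.25 * (u(i-1, j) + u(i+1, j)  &
                               + u(i, j-1) + u(i, j+1) - f(i, j))
            end do
         end do
      end do
   end subroutine redblack_sweep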
The Star-100/205/ETA systems were memory-to-memory machines (a hoover-and-garden-hose system) with relatively long startup times for the pipelines. A code had to be suitable for that architecture (and could excel if linked triads* were used). The Crays of those days ran at a faster clock, and the startup overhead of vector registers was considerably lower. Hence many codes ran faster on an X-MP, giving Cray the edge.
*) Linked triad: one or two of the input operands are vectors, one of the two operators is a floating-point multiply, and the other is a floating-point add or subtract, all packed into one instruction.
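In modern array syntax a linked triad is just the familiar axpy-shaped statement; a tiny runnable example:

   program triad
      implicit none
      integer, parameter :: n = 4
      real :: a(n), b(n), c(n), s

      s = 2.0
      b = [1.0, 2.0, 3.0, 4.0]
      c = 1.0

      ! vector = scalar*vector + vector: on the 205 the multiply and the
      ! add were linked into a single pipelined memory-to-memory instruction
      a = s * b + c
      print *, a   ! 3. 5. 7. 9.
   end program triad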
Yes, described here, which says VAST/77to90 could translate
      subroutine demo(a,b,c,n)
      dimension a(n), b(n), c(n)
      common /ecom/scratch(10000)
      do 100 i = 1, n
         a(i) = b(i) + c(i)
         if (a(i) .gt. 100.0) then
            a(i) = a(i) + scratch(i)
            go to 100
         end if
         c(i) = a(i)*2
  100 continue
      end
to
module Vecom
   real, dimension(10000) :: scratch
end module Vecom

subroutine demo(a, b, c, n)
   !---------------------------
   ! Modules
   !---------------------------
   USE Vecom
   implicit none
   !---------------------------
   ! Dummy Arguments
   !---------------------------
   integer n
   real, dimension(n) :: a, b, c
   !---------------------------
   ! Local Variables
   !---------------------------
   integer :: i
   !---------------------------
   a = b + c
   where (a > 100.0)
      a = a + scratch(:n)
   elsewhere
      c = a*2
   end where
end subroutine demo
but the product was discontinued many years ago. There is still so much FORTRAN code that could benefit from a tool like this, for clarity if not for speed.
This doesn’t prove a lot, IMO. Maybe they put a lot of effort into developing a performant compiler for their array syntax, and consequently not that much into the loop optimizer.
Yes, this makes an important difference in the end, as temporary arrays are perf-killers in array syntax.
But this is somewhat of an unfair game: if I understand correctly, they got this performance based on a strong assumption that the lhs could be updated on the fly. But similarly, assuming that there is no dependency in a classical loop makes the work of a loop optimizer much easier.
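To spell that out, the recurrence below is exactly the pattern a loop optimizer has to prove absent before it can vectorise; the 205 approach simply made that the programmer’s problem:

   program dependence_demo
      implicit none
      integer, parameter :: n = 5
      real :: a(n), b(n), c(n)
      integer :: i

      b = 1.0
      c = 2.0

      ! no dependence between iterations: safe to vectorise in any order
      do i = 1, n
         a(i) = b(i) + c(i)
      end do

      ! loop-carried dependence: a(i) reads the a(i-1) just written, so
      ! straightforward vectorisation would change the result
      do i = 2, n
         a(i) = a(i-1) + b(i)
      end do
      print *, a
   end program dependence_demo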
I remember looking forward to the array syntax in F90. However as it turns out, I don’t use it that much. Far, far, more important has been modules, derived types, and dynamic memory (automatics, allocatables, pointers/targets, etc).
It is kind of depressing reading old documents describing the fights in the Fortran committee back in the 1970s and 1980s over the tension between standardizing existing practice vs developing new capabilities. They seemed to have learned little from the better features of ALGOL 60 and its descendants (e.g., ALGOL-W, SIMULA 67, PASCAL, etc.) until it was nearly too late. One survey by Loren Meissner back in the mid-'70s cataloged over 50 Fortran pre-processors in use to provide “structured programming” constructs. Yet block-structured IF/ELSEIF/ELSE/ENDIF wasn’t even in the first drafts of Fortran 77, and barely made it into the final version. (That said, F77 was still a huge improvement over F66.)
I found the CDC white paper. It is called “An Informal Approach to Number Crunching on the Cyber 203/205”, by Bjorn Mossberg, dated 1981. Unfortunately a Google search turns up empty. So it really does need to be scanned for posterity.
According to this document, the DEC and IBM opposition to fortran 8x dates back to 1984. I was not aware of it at that time, I only heard about it around 1987, during the public review period. Somewhere, I have a stack of copies of transparencies (this was before powerpoint was popular) that DEC was using to convince their customers that f8x was a bad idea.