If you’re going the implicit none way, note that Modern Fortran Explained (2023) recommends using `implicit none (type, external)` rather than plain `implicit none`, and note also that this is an F2018 feature: some compilers won’t have implemented it yet.
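For example, a minimal F2018 program using the recommended form (a compiler without the feature will simply reject the statement):

```fortran
program demo
  ! F2018: "type" forbids implicit typing, "external" additionally requires
  ! every referenced external procedure to have an explicit interface or
  ! the external attribute.
  implicit none (type, external)
  integer :: i
  i = 42
  print *, i
end program demo
```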
“Visions of Turbulence” is a computational physics fluid dynamics simulation made with a custom Fortran-based numerical solver and visualised using a custom Python-based rendering engine and a custom colourmap. Despite being only ten seconds long, the video took approximately eight days to render on a single-threaded implementation of the software at a resolution of 4K (3840x2160 cells) and 60 fps. Technical Details: The video demonstrates a well-known test case that highlights the Kelvin-Helmholtz instability arising when two gases move in opposite directions past each other. The particular mathematical model used is the two-dimensional planar compressible Euler equations with no source terms. The numerical solver is a Weighted Essentially Non-Oscillatory (WENO) 5th-order spatial discretisation scheme, with a Strong Stability-Preserving Runge-Kutta 3rd-order (SSPRK3) time discretisation scheme. The colourmap is scaled to show the fluid density, with red representing higher densities and blue representing lower densities.
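For reference, the SSPRK3 scheme mentioned above advances the spatially discretised system $u_t = L(u)$ through the standard three-stage Shu-Osher combination of forward-Euler steps:

```latex
\begin{aligned}
u^{(1)} &= u^n + \Delta t\, L(u^n) \\
u^{(2)} &= \tfrac{3}{4} u^n + \tfrac{1}{4}\left( u^{(1)} + \Delta t\, L(u^{(1)}) \right) \\
u^{n+1} &= \tfrac{1}{3} u^n + \tfrac{2}{3}\left( u^{(2)} + \Delta t\, L(u^{(2)}) \right)
\end{aligned}
```

Each stage is a convex combination of forward-Euler steps, which is what gives the scheme its strong-stability-preserving property when paired with a non-oscillatory spatial operator like WENO5.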
We take stacks and heaps for granted nowadays, but back in the very old days of computing, computers operated without a stack or a heap.
Tell a recent college graduate this, and you may as well tell them that there was a time when you didn’t have instant access to millions of cat videos.
It’s not too hard to imagine computing without dynamic memory allocation. You just used fixed-size memory buffers for everything. If you had to operate on variable-sized data, you reserved a fixed-size buffer of some capacity large enough to accommodate any data you would reasonably be expected to process, and if somebody asked for more, you just exited the program with a fatal error. If you were really nice, you provided a compile-time configuration so your clients could adjust the maximum capacity to suit their datasets. And if you were really fancy, you wrote a custom allocator that operated on that fixed-size buffer so people could “allocate” and “free” memory from the buffer.
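The "custom allocator over a fixed buffer" idea is essentially a bump allocator. A minimal sketch (names and sizes are illustrative, not from any historical system):

```python
class FixedArena:
    """A fixed-size buffer with 'allocate' and a trivial 'free': a bump allocator."""

    def __init__(self, capacity):
        self.buffer = bytearray(capacity)  # the one fixed-size buffer, reserved up front
        self.next_free = 0                 # bump pointer into the buffer

    def alloc(self, nbytes):
        # The period-appropriate behaviour: out of room means a fatal error.
        if self.next_free + nbytes > len(self.buffer):
            raise SystemExit("fatal: arena exhausted")
        offset = self.next_free
        self.next_free += nbytes
        return offset                      # caller uses buffer[offset : offset + nbytes]

    def free_all(self):
        self.next_free = 0                 # the simplest possible 'free': reset everything


arena = FixedArena(16)
a = arena.alloc(8)   # offset 0
b = arena.alloc(8)   # offset 8
```

Real fixed-buffer allocators often supported freeing individual blocks via a free list, but the reset-everything variant above captures the basic scheme.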
But operating without a stack? How did you call a function if you didn’t have a stack for the return address or local variables?
Here’s how it worked.
First, the compiler defined a secret global variable for each inbound function parameter, plus another secret global variable for each function to hold the return address. It also defined a secret global variable for each of the function’s local variables.
I own a few books by David Barron (see David W. Barron on Wikipedia), one of them being a description of operating systems and how useful they are. Talk of things that we all take for granted!
That’s amazing. It’s interesting to ask the AI what words are similar to Fortran.
I checked, and the answer was:
Some words similar to Fortran include programming language, computer programming, computer algorithms, scientific computing, and numerical analysis.
And the next joke: you could animate a pointer talk with the community.
It would be fun to make an asteroid comedy show, translated for NASA, or for Roscosmos Comedy Club cash!
By “similar”, I would have expected something like “Forban”. But it is a good thing to make Fortran a synonym of all those terms…
Well, I don’t understand the sentence, but indeed Fortran has its own asteroid (9548) Fortran:
(that was the subject of the original post of this thread)
Interestingly, the first observation was on 1954-09-04, the year Backus’ team started to work on the FORTRAN language/compiler. Is this the reason why they named it Fortran?
In fact it was observed just twice, at Palomar Mountain in 1954, and then discovered in 1985. Maybe those 1954 observations were found later on old photographic plates?
There is a 1-hour video by Ken Taylor on the Cray 1 Supercomputer that also covers the history of computing.
At 40:09 it says that the programming languages available were assembly and Fortran and that the applications were “Aircraft simulation, Astrophysics, Automotive design, Computational Chemistry, Crash Analysis, Structural Analysis, Movie making, Graphic, Seismic, Fluid Dynamics, Electronics etc.”
There is a Cray History site “where the entire timeline of supercomputers that bore the Cray name are represented” and a good Wikipedia article on Seymour Cray.
I had the privilege of hearing Seymour Cray speak at a Cray Engineering and Science Symposium around 1983 or 1984 (can’t remember the exact year). Two things I remember about his talk. First, he said that microprocessor fabrication processes had advanced to the point where Cray could shrink most of the Cray 1 system down to one or two chips that could fit on your desktop. He said, however, that he didn’t see a market for such a system. The second thing, which has stayed with me after all these years, is that Cray spent as much money developing the first Cray Fortran compiler as they did developing the vector processing hardware for the Cray 1. Imagine where Fortran would be if the Intels, AMDs, Nvidias, and IBMs of the world invested even a third as much as they spend on developing a new CPU on their Fortran compilers.
Indeed a few years later, Cray Research developed the J-90 system - which had a Y-MP (based on the CRAY-1) instruction set. Each J-90 CPU was composed of two main chips - one for the scalar portion, and one for the vector portion. IBM’s CMOS chip fab was used for the chips. One of the very few non-IBM organizations that were allowed to use it.
Interestingly, the J-90 CPUs also included a small data cache for scalar loads/stores. (Cray machines have always had instruction caching. Even way back to the CDC 6600.) You could turn the scalar data cache on/off to compare times, and use the hardware performance monitor to look at hit/miss rates. Really helped on some scalar-dominated codes.
IIRC, the follow-on to the J-90, the SV-1, had both scalar and vector portions on a single chip. It also had a larger data cache for both scalar and vector references. (L2 for scalar references, L1 for vector references.) Unfortunately it came out after the Cray-SGI merger, and by then I was mostly working on the SGI Origin stuff. So I never had much of a chance to play with it.
I had access to an SV-1 for a while. I remember it being an attempt to merge classic Cray vector computing with the massively multi-threaded/multi-streaming technology that Cray acquired when they bought (merged with?) Burton Smith’s Tera Computer Company, which produced a really interesting machine called the MTA. I think Cray released a version of the MTA under the Cray name. One thing I remember about the SV-1 is that Cray kept the default 64-bit reals and integers from the Cray C-90/X-MP days in their Fortran compiler, but for some reason used a default 32-bit integer in their C compiler. This made it a challenge to get software like netCDF, which passes a lot of information as 32-bit integers, working on the SV-1.
The SV-1 was developed before that portion of the previous Cray Research was spun off by SGI. CRI-SGI merger was in 1996, and the ‘vector’ portion (which also included on-going T3E follow-on) was spun off from SGI 3-4 years later. So at least initially, there was no Tera influence on it.
My regret from those years was when the high-end SGI graphics folks left SGI to start Nvidia. (This after a failed attempt to do some joint development work with Microsoft on graphics standards.) Based on the people who left, I knew Nvidia would be successful. But failed to buy any NVDA stock early on…
Thanks for the Ken Taylor link. I’ll have to watch it.
Yes, initially it was just CAL (Cray Assembly Language) and CFT (Cray Fortran). The CAL assembler was very similar to the CDC COMPASS assembler - because the fellow who wrote it was ex-CDC and used the COMPASS Reference Manual as a guide. The CFT compiler was a beast. Over 100K lines of CAL - all in a single IDENT. In fact it broke the UPDATE source code management program when it hit 131K LOC.
Outside of the OS kernel, much of the rest of the Cray OS was written in Fortran. Some in a variant of Fortran called SKOL. (SKOL was a variant of the MORTRAN Fortran preprocessor.) Over at the DOE labs, much of their homegrown CTSS/LTSS system was written in the LRLTRAN variant of Fortran.
Around the time I joined CRI (1984) a PASCAL compiler had been obtained - I guess from the U of Minnesota - and we were using it to develop the follow-on to CFT, CFT77. The CFT77 compiler was written in PASCAL and was a globally optimizing compiler. It produced better scalar code than CFT from the start, but it took some time to learn all of CFT’s vectorization tricks.
Also about the same time, the CRAY-2 group had obtained a copy of C from Bell Labs. (The CRAY-1 port was previously done by dmr himself.) They used it to port System 5 unix to a CRAY-XMP system to see how it would run. It did ok, though it needed a lot of work for multiple processors, networking, batch subsystem, etc. The decision was made to run a unix-based OS on the CRAY-2. This became UNICOS, and was then also done for the X-MP and Y-MP lineage systems.
Fortran-wise, there was a version of CFT early on that was modified for the CRAY-2 instruction set. But cft was on its way out, and cft77 took over.
Correct. I was thinking about the X1. However, according to Wikipedia both featured “multi-streaming” processors in addition to vector processors. Also, it was Tera that bought the remnants of Cray Research from SGI in 2000 and renamed itself Cray, Inc. The X1 was introduced in 2003.
Also, I have fond memories of the original CFT. It was amazingly fast for the time at parsing code and generating an executable. I also remember the PASCAL-based compiler because of the order-of-magnitude (at least it felt that way) slowdown in compilation speed with respect to CFT. For years I thought the PASCAL-based version of CFT was just an “urban legend”, because I couldn’t imagine at the time anyone writing a compiler in PASCAL, and I assumed there was some other reason (maybe the move to UNICOS) for the slowdown.
Not only was CFT77 written in PASCAL, but Version 2 of the CAL assembler was also written in PASCAL.
It was just a point in time where OS and compiler developers were coming to terms with writing their products in higher level languages, rather than assembly code. PASCAL fit the bill, though C came shortly thereafter. The CRAY-2 had a somewhat different instruction set than the CRAY-1/X-MP/Y-MP - so writing in a higher level language as much as possible was helpful there as well. The cft90 front end was written in C, and gradually the back end was rewritten in C.
At 80 years old, Earl Einhorn is a living legend. Known as the Father of Fortran Art, he is one of the true pioneers of the entire computer-programmed art world.
Earl Einhorn began studying Electrical Engineering at CCNY but ended up in Mathematics. He started programming in Fortran in 1968 as the Master Actuarial Programmer for Equitable Life.
In 1989, obsessed with marrying his education with art, he started using the Fortran language to write the first of what would become over thirty different computer art programs he has developed.
“My love was always for art and I hoped I could produce work that no one had ever seen,” says Einhorn. “I started early, writing Fortran with the Calcomp Pen plotter, creating schematic drawings in color in 1974.”
I found a way of making extreme-resolution pictures. The resolution I use today, of 12,000 by 15,000 pixels, could give me a print 40 inches by 50 inches printed at 300 DPI.
An uncompressed RGB image at that resolution should therefore have a respectable size of 540 MB (12,000 × 15,000 pixels × 3 bytes per pixel).
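The arithmetic behind those figures checks out (assuming 8 bits per RGB channel, i.e. 3 bytes per pixel, and decimal megabytes):

```python
width, height = 12_000, 15_000  # pixels, as quoted above
bytes_per_pixel = 3             # uncompressed RGB, 8 bits per channel

size_bytes = width * height * bytes_per_pixel
print(size_bytes / 1e6)         # 540.0 (MB, decimal)

# Print dimensions at 300 DPI:
print(width / 300, height / 300)  # 40.0 50.0 (inches)
```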
Although we can also read:
Known as the Father of Fortran Art and one of the pioneers of generative art, Earl started using the Fortran programming language to make his art in 1989. All of his work is created by one of thirty computer programs he has developed & then colorized by hand in Photoshop.
I don’t know whether the Fortran image is B&W or uses grey levels.
We could invite him on the Discourse…
UPDATED: done, invitation sent.