A software engineer's circuitous journey to calculate eigenvalues

“So, instead of writing routines from scratch via textbooks, we sometimes resort to mechanically translating an old distribution of LAPACK, written in FORTRAN 77, into Common Lisp. Due to the age of the routines, I personally think it’s prudent to minimize its usage.”

1 Like

This reminded me of an EW Dijkstra essay.

> Unfathomed misunderstanding is further revealed by the term “software maintenance”, as a result of which many people continue to believe that programs —and even programming languages themselves— are subject to wear and tear. Your car needs maintenance too, doesn’t it? Famous is the story of the oil company that believed that its PASCAL programs did not last as long as its FORTRAN programs “because PASCAL was not maintained”.

I don’t know why it isn’t simply linked to instead of translated. Edit: it is done to reduce a dependency. I can respect that.

3 Likes

While the algorithms remain the same, there actually is some wear and tear in software:

  • Changing software / hardware environment
  • Changing requirements of the users of the software.

So staying the same can de facto already amount to wear and tear.

2 Likes

For run-of-the-mill software (i.e. CRUD – Create, Read, Update, Delete), you have a point about changing requirements.

For a numerical linear algebra library, the only one that makes sense is a hardware change. The only requirement for a numerical package is to compute outputs from inputs efficiently and correctly. How the results are used is outside its scope.

But that only means a compiler needs a back end for the new platform. Algorithms don’t depreciate as capital equipment does. So I think Dijkstra had a point.

3 Likes

Just in the past 5 or so years, float16 has risen to prominence in ML and some simulation, and FMA has become ubiquitous, so I think there’s definitely still a fairly high rate of hardware change.
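To make the FMA point concrete: a fused multiply-add computes a*b + c with a single rounding, so it preserves low-order bits that a separate multiply followed by an add discards. A minimal Python sketch (my own illustration, not from the thread) that uses exact rational arithmetic to expose the bits lost to the separate rounding:

```python
from fractions import Fraction

# The exact product a*b = 1 + 2**-26 + 2**-54 needs 54 fraction bits,
# but an IEEE double has only 52, so the rounded product drops 2**-54.
a = b = 1.0 + 2.0**-27

p = a * b                                           # rounded product
residual = Fraction(a) * Fraction(b) - Fraction(p)  # exactly what rounding discarded

# A hardware FMA computing fma(a, b, -p) would return this residual exactly,
# while the two-step (a*b) - p evaluates to 0.0 in doubles: the bits are gone.
print(float(residual))   # small but nonzero
print(a * b - p)         # 0.0
```

This single-rounding behaviour is why compilers contracting `a*b + c` into an FMA can change results bit-for-bit, which matters when validating old numerical codes on new hardware.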

I am struggling to understand the significance of FMA in relation to Fortran code maintenance.
There is little control of SIMD instructions in the Fortran codes I develop; it is all “attempted” via compiler options.

For gfortran, my default compute options are “-O3 -march=native -ffast-math”, where I understand -O3 enables auto-vectorization and -march=native selects the AVX/SIMD instructions for the host.
I have also had recommendations to use -O3 -mno-fma, even for daxpy, although the change was insignificant. I find explicit control of FMA instructions confusing and ineffective on the Intel/AMD processors I have used.

Why the focus on FMA with Fortran ?

The iterative eigensolver I use in my FEA code is based on a version provided by Ed Wilson in 1981, which I have enhanced for OpenMP. I am amazed at the speed and precision it continues to achieve.
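For readers unfamiliar with this family of methods: Wilson's solver is a subspace (vector) iteration method. As a rough illustration of the simplest member of the family — plain power iteration, not the author's actual code — a Python sketch:

```python
def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue/eigenvector of a symmetric
    matrix A (list of lists) via repeated matrix-vector products."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)    # arbitrary starting vector
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]  # renormalize each sweep
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(v[i] * Av[i] for i in range(n))   # Rayleigh quotient
    return lam, v

# Eigenvalues of [[2,1],[1,2]] are 3 and 1; the iteration converges to 3.
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
print(round(lam, 6))   # 3.0
```

Subspace iteration generalizes this by iterating a block of vectors at once (with orthogonalization), which is why it parallelizes well — each sweep is dominated by matrix-vector products.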

Unfortunately I don’t have experience of the changes achieved by LAPACK et al. Do these codes achieve explicit SIMD instruction control?

Changing requirements of the users can also be found in linear algebra. If you take a look at BLIS, more specifically the section Why should I use BLIS instead of GotoBLAS / OpenBLAS / ATLAS / MKL / ESSL / ACML / Accelerate?, the following requirements weren’t met by the BLAS Fortran interface:

  • generalized matrix storage (e.g. row-major for C and derivative languages)
  • enhanced complex domain support
  • enhanced threading support
  • support for mixed-precision operations

For sparse linear algebra libraries, Fortran codes can be difficult to use because of the assumption of 1-based indexing. Some vendors offer packages where you can select between zero- and one-based indexing at the price of some code bloat (an extra dummy argument in every call, condition checks, and index adjustment in the library).

Based on what I’ve read on arrays in the Lisp Cookbook, Lisp uses zero-based, row-major indexing, so directly re-using Fortran routines would likely feel unidiomatic to the Common Lisp user.
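The two conventions can be summarized by their flat-offset formulas; a small sketch (my own illustration) of where element (i, j) of an m×n matrix lands in linear memory under each:

```python
def offset_c(i, j, n):
    """Row-major, 0-based (C / Common Lisp): rows are contiguous."""
    return i * n + j

def offset_f(i, j, m):
    """Column-major, 1-based (Fortran): columns are contiguous."""
    return (j - 1) * m + (i - 1)

# The same logical element of a 2x3 matrix sits at different flat positions:
print(offset_c(0, 1, 3))   # 1 -- second element of the first row
print(offset_f(1, 2, 2))   # 2 -- same element under the Fortran convention
```

This is why handing a Lisp array's storage directly to a Fortran routine effectively transposes the matrix, on top of the off-by-one in every index.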

I had to rewrite quite a few numerical routines to use them efficiently in a parallelized code. In the end, numerical libraries are useful as off-the-shelf tools only to a certain extent. In a highly optimized numerical code, one should always at least review what the libraries are doing.

1 Like

> Based on what I’ve read on arrays in the Lisp Cookbook, Lisp uses zero-based indexing and row-major indexing, so directly re-using Fortran routines would likely feel unidiomatic to the Common Lisp user.

It should not be terribly hard to write Lisp macros that take care of that. From the original post, the key issue was eliminating a foreign dependency, which I can respect.

1 Like