The problem is that these classic codes, while great, are not perfect. They are written in the obsolete FORTRAN 77 fixed-form style, which nobody wants anything to do with nowadays but which continues to poison the Fortran ecosystem. They are littered with now-unnecessary and confusing spaghetti-code constructs such as GOTOs and arithmetic IF statements. They are not easy to incorporate into modern codes (there is no package manager for Netlib). Development could have continued up to the present day, and each of these libraries could have state-of-the-art, modern Fortran implementations of both the classic and the latest algorithms. Well, it’s not too late. We have the internet now, and ways to collaborate on code (e.g., GitHub). We can restart development of some of these libraries:
Here are the Fortran “packs” I know of (allowing other spellings). In several cases there are already modernized versions, as listed in Jacob’s post. Any others?
acepack – ACE and AVAS methods for choosing regression transformations
alfpack – normalized associated Legendre functions of the first kind
arpack – large scale eigenvalue problems
bayespack – integration for Bayesian Inference
biepack – Boundary Integral Equation Package
bifpack – Bifurcation, Continuation and Stability Analysis
calpak – Calendar Calculations
charpak – character/string manipulation (link?)
chrpak – handles characters and strings
clawpack – finite volume methods for linear and nonlinear hyperbolic systems of conservation laws
codepack – computes and compares “codes” for graphs, directed graphs, multigraphs, and other generalizations of an abstract graph
cubpack – estimates the integral of a function (or vector of functions) over a collection of N-dimensional hyperrectangles and simplices
daepak and manpak – Differential algebraic equations
daspk – differential-algebraic system solver
datapac – statistics
ellpack – elliptic partial differential equations
eispack – eigenvalues and eigenvectors
fcnpak – Associated Legendre Functions and Normalized Legendre Polynomials
fishpack – Poisson Equation Solver
fitpack – curve and surface fitting with splines and tensor product splines
fftpack – fast Fourier transform of periodic and other symmetric sequences
funpack – special functions
gcvpack – Generalized Cross Validation to fit splines
gempack – General Equilibrium Modelling (economics)
grafpack – common calculations involving (abstract mathematical) graphs
grkpack – fitting smoothing spline ANOVA models for exponential families
hompack90 – solving nonlinear systems of equations by homotopy methods
icepack – sea-ice column physics
ilupack – multilevel ILU preconditioners for general real and complex matrices as well as real and complex symmetric (Hermitian) positive definite systems
iqpack – weights of interpolatory quadratures
itpack – solving large sparse linear systems by adaptive accelerated iterative algorithms
lapack – linear algebra
laupack – operations on mathematical graphs
linpack – solve linear equations and linear least-squares problems
minpack – solves systems of nonlinear equations, or carries out the least squares minimization of the residual of a set of linear or nonlinear equations
mudpack – multigrid iteration for solving real or complex elliptic partial differential equations
mvnpack – numerical computation of multivariate normal integrals
napack – numerical linear algebra and optimization
odepack – solvers for ordinary differential equations
odrpack – Weighted Orthogonal Distance Regression
orderpack – Unconditional, Unique, and Partial Ranking, Sorting, and Permutation
polypack – NCAR Graphics Routines to Manipulate Polygons
polpak – evaluate mathematical functions, including special polynomials
pppack – evaluates piecewise polynomial functions, including cubic splines
propack – compute the singular value decomposition of large and sparse or structured matrices
quadpack – numerical integration
regridpack – interpolating values between one-, two-, three-, and four-dimensional arrays defined on uniform or nonuniform orthogonal grids
rkpack – Gaussian regression using smoothing splines
scalapack – high-performance linear algebra routines for parallel distributed memory machines
simpack – approximates the integral of a vector of functions over a multidimensional simplex
sparsepak – solves large sparse systems of linear equations
spherepack – perform spherical harmonic transforms and compute spherical differential operators
starpac – Standards Time Series and Regression Package
statpack – Fortran 95/2003 multi-threaded library for solving the most commonly occurring mathematical and statistical problems in the processing of climate model outputs and datasets and more generally in the analysis of huge datasets
stripack – Delaunay triangulation and Voronoi diagram on the surface of a sphere
stspac – statistics, linear algebra, and other numerical procedures
subpak – utility library
svdpack – iterative methods for computing the singular value decomposition of large sparse matrices
testpack – Testing Multidimensional Integration Routines
tlcpack – interpolating values between one-, two-, three-, and four-dimensional arrays defined on uniform and nonuniform orthogonal grids
tnpack – Truncated-Newton optimization package for multivariate nonlinear unconstrained problems
treepack – common calculations involving a special kind of graph known as a tree
tripack – Delaunay triangulation of a set of points in the plane
tspack – construct a smooth function which interpolates a discrete set of data points
umfpack – Fortran interface to C umfpack; solve unsymmetric sparse linear systems
vfftpack – fast Fourier transform of multiple real sequences
wienerpack – Computing Probabilities Associated with Wiener and Brownian Bridge Processes
Having done my own refactoring of several old codes in the past, I applaud Jacob and others for bringing this topic forward. (I vote for making Jacob the “leader of the packs”.) Based on my own experience, I would caution against going overboard on trying to eliminate every GOTO etc. I’ve found that even what appear to be simple modifications can change the results you get compared to the original code if you change the order of execution or how the compiler chooses to optimize a section of code. In most cases this will only show up in the last one or two significant digits, but if your goal is for the modified code to replicate the old code’s results exactly, you might be disappointed.
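A trivial demonstration of that effect (single precision, random data; everything here is made up for illustration): summing the very same numbers front-to-back and back-to-front already disagrees in the last digits, and a compiler is free to reorder or vectorize an accumulation in just this way.

program sum_order
   implicit none
   integer, parameter :: n = 100000
   integer :: i
   real :: x(n), forward, backward
   call random_number(x)
   forward  = 0.0
   backward = 0.0
   do i = 1, n                ! accumulate front to back
      forward = forward + x(i)
   end do
   do i = n, 1, -1            ! same numbers, back to front
      backward = backward + x(i)
   end do
   print *, forward, backward, forward - backward   ! typically differs slightly
end program sum_order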
I’m thinking it’s OK to keep them as separate libraries, rather than merging them into stdlib? For example, SciPy is unlikely to want to pull in all of stdlib, but they may want to pull in some of these standalone libs.
For Fortran users, as long as they are easily discoverable, and usable via FPM, it doesn’t matter where they come from. Some care may need to be taken to ensure they don’t conflict in some way with other libs.
Over time, we can move more into the fortran-lang GitHub organization, I think. Maybe those would have more of an air of being “official”, rather than being in individual users’ accounts?
Yep, you are right, but I don’t think we should be constrained to reproducing the results exactly to every decimal place. So, as long as the changes are mathematically equivalent, they should be fair game (as long as they aren’t slowing things down). I think it’s more important to present “modern” code that we can hold up as an example of what Fortran should look like.
Every time a young person sees a computed GOTO, we have lost a potential Fortran programmer forever!
Perhaps there should be a vetting process for registered fpm(1) packages that includes cloning a copy to a central repository, but I think most of these work best as fpm(1) packages. We seem to have lost a registered package (the cairo interface) and fpm-search? Carlos Une’s GitHub site appears to be gone, unless something is glitching. Along the lines of modernizing, I was going to replace the L2-norm procedure used by ODEPACK with calls to the standard NORM2() intrinsic, and I had not yet modernized that particular routine much (so it still has GOTO and ASSIGN and so on); but I got big enough differences in the answers that now I have to dig a lot deeper to see which is correct. Those kinds of things can really complicate modernizing the code. The oddest one was that when I replaced a procedure called DCOPY with array syntax, the code slowed down significantly, which was a big surprise. Perhaps because a lot of arguments are still passed as assumed-size arrays (dimensioned to *), the subscripted array on the LHS might be producing an unexpected copy. Anyway, modernizing can hit unexpected snags; but no computed GOTOs are left. I REALLY need a bigger test suite before making any significant changes.
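For anyone attempting the same kind of swap, this is the flavor of change I mean (a schematic sketch only, not the actual ODEPACK routine; the names vnorm_old and vnorm_new are made up):

! old style: explicit loop accumulating a sum of squares
function vnorm_old(n, v) result(vnorm)
   implicit none
   integer, intent(in) :: n
   double precision, intent(in) :: v(*)   ! assumed-size, as in many old packs
   double precision :: vnorm, ssq
   integer :: i
   ssq = 0.0d0
   do i = 1, n
      ssq = ssq + v(i)**2
   end do
   vnorm = sqrt(ssq)
end function vnorm_old

! modernized: one call to the NORM2 intrinsic (Fortran 2008)
function vnorm_new(n, v) result(vnorm)
   implicit none
   integer, intent(in) :: n
   double precision, intent(in) :: v(n)
   double precision :: vnorm
   vnorm = norm2(v)
end function vnorm_new

Note that even a one-line replacement like this need not reproduce the old answers bit for bit: NORM2 is permitted to scale to avoid overflow and may accumulate in a different order than the original loop, so small differences are expected; large ones point to something else.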
Jacob, I agree that all the old computed GOTOs, assigned GOTOs, etc. should be removed, along with any branch-up spaghetti code, IF it can be done in a way that is consistent with the original code. I have seen some codes that were so hopelessly “spaghettified” that I was better off just going back to the original math/physics etc. and rewriting everything from scratch. As I’ve stated in a previous thread, there is still one very valuable use for GOTO, and that’s quickly exiting a section of code to the end of the routine to do error processing or sanely exit the procedure. Over the years my position on GOTO has evolved to match a quote I saw attributed to Linus Torvalds many years ago: “The problem with GOTO is not that it is inherently evil, it’s how it’s implemented in some languages.” That’s why I’ve advocated allowing statement labels in Fortran to be fully alphanumeric. If we have to have them in the language, I think that writing
GO TO error_controller
...
error_controller: If (error) then
   ! etc.
end if

is a lot more palatable than

GO TO 113598
I have a similar opinion about labeled FORMAT statements.
If the FORTRAN code has been translated to MATLAB, R, Python/NumPy, or Julia, those translations could be a starting point for a modern Fortran code. In general, in addition to translating the packs to modern Fortran, there should also be modern Fortran versions of the important packages in those languages.
Regarding format strings, what I do is define format strings with mnemonic names such as
character (len=*), parameter :: &
   fmt_ci = "(a30,':',*(1x,i6))",    & ! character, then integers
   fmt_cr = "(a30,':',*(1x,f10.4))", & ! character, then reals
   fmt_c  = "(a30,':',*(1x,a))"        ! characters
and then while coding I know which format string to use without having to look up prior definitions.
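For instance, a small self-contained demonstration (the labels and values are made up, and two of the constants are repeated here so it runs on its own):

program fmt_demo
   implicit none
   character (len=*), parameter :: &
      fmt_ci = "(a30,':',*(1x,i6))",   & ! character, then integers
      fmt_cr = "(a30,':',*(1x,f10.4))"   ! character, then reals
   write (*, fmt_ci) "grid points", 128, 256, 512
   write (*, fmt_cr) "relative tolerance", 0.0010, 0.0001
end program fmt_demo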
I’ve used the suggested approach of embedding formats in character strings before and find it particularly useful if I need to modify the format during the run. However, what I’m proposing is more in the context of code refactoring to make old code more readable. I also find that in some instances, using an old-fashioned FORMAT statement is more convenient than trying to embed the format string in the body of the READ or WRITE statement. Also, at one time there was a reason you were taught to put all your formats before the first executable statement, or at the end of the routine before the STOP and/or RETURN. I’ve forgotten exactly what that reason was/is, but I think it had something to do with how compilers stored the FORMAT information and whether they needed to jump around it if the formats were in the body of the code. I’m probably wrong about that, but that’s what I remember.
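For readers who have not seen both styles side by side, the comparison is between something like the following (a trivial made-up example):

program fmt_styles
   implicit none
   real :: x, y
   x = 1.5
   y = 2.5
   ! old style: labeled FORMAT statement, traditionally collected at the
   ! top or bottom of the routine
   write (*, 100) x, y
100 format ('coordinates:', 2f10.4)
   ! same output with the format embedded in the WRITE statement
   write (*, '("coordinates:", 2f10.4)') x, y
end program fmt_styles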
Besides revising popular packs, priority could be given to revising packs that use deleted features and are no longer standard Fortran. SLATEC, for example, is not. A second priority is packs that use obsolescent features, even after automatic conversion from the obsolescent fixed source form to free source form. For any code, two milestones would be (1) legal and (2) non-obsolescent. Although work would go beyond these objectives, some conservative users who want to minimize the risk of new bugs may choose to use versions that meet these objectives, which should be accessible on GitHub.
I think modern Fortran has more readable variants that accomplish a similar thing. The BLOCK construct seems to offer one part of the feature: you can name the block construct and just “exit [name]” to get out. Perhaps not as convenient as the GOTO, but much more maintainable. Other constructs can probably fill the gap.
Yes, you can also give a name to DO-loops and IF-blocks and use those to exit the construct. That is clearer and better organised than a statement label to jump to.
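A minimal, self-contained sketch of both ideas (the variable names and the toy logic are made up):

program exit_demo
   implicit none
   integer :: i, j
   real :: a(3, 3)
   call random_number(a)
   a = a - 0.5

   ! a named BLOCK can stand in for a forward GOTO to a cleanup/error section
   validate: block
      if (size(a) == 0) then
         print *, 'nothing to do'
         exit validate               ! resumes just after END BLOCK
      end if
      print *, 'array looks usable'
   end block validate

   ! named loops: EXIT the outer loop directly from inside the inner one
   rows: do i = 1, size(a, 1)
      do j = 1, size(a, 2)
         if (a(i, j) < 0.0) then
            print *, 'first negative entry at', i, j
            exit rows                ! leaves both loops at once
         end if
      end do
   end do rows
end program exit_demo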
Of course, we should not forget that all control structures are the result of years of experimentation. The “modern” variants did not just fall down from the sky.
In what sense is it more maintainable? BLOCK introduces a new scoping unit, which can have many side effects. So changing the place one wants to jump to is surely much easier to achieve by moving a labeled CONTINUE than by moving an END BLOCK. Also, if one forgets or does not bother to name the block, finding the destination may be even harder.
I’ve always considered the FUD against GOTO somewhat ridiculous. Certainly, if overused it will make your code crappy, but sometimes it can be a simple, useful tool.
I’m fully aware that you can use BLOCKs and other named constructs as simple replacements for GOTO. I went through the painful experience of trying to use several nested BLOCK constructs to replace logic with multiple GOTOs that jumped to several print statements (each followed by a RETURN). I managed to make it work, but at the end of the day the resulting code was just as complicated as the original. Just because something is classified as “modern” doesn’t make it better for SOME tasks. I’m not advocating ignoring all GOTOs when you refactor a code, but I’ve done enough of this (probably more than most people commenting in this thread) to know that GOTO is superior to other methods in some (very few) cases, and you shouldn’t jump through hoops to remove them if the amount of work required is not justified.
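To give a flavor of the pattern (a schematic sketch, not the code being discussed; in the real cases the jumps usually originate deep inside nested loops and conditionals, which is what makes a mechanical BLOCK rewrite so awkward):

program goto_flavor
   implicit none
   integer :: ierr
   call check_inputs(2.0, 5.0, ierr)
   print *, 'ierr =', ierr
contains
   subroutine check_inputs(a, b, ierr)
      real, intent(in)     :: a, b
      integer, intent(out) :: ierr
      ierr = 0
      if (a <= 0.0) go to 100      ! each failure jumps to its own
      if (b >= a)   go to 200      ! message-and-return at the bottom
      return
100   print *, 'a must be positive'
      ierr = 1
      return
200   print *, 'b must be less than a'
      ierr = 2
      return
   end subroutine check_inputs
end program goto_flavor

Flattened like this, an IF/ELSE IF chain would do just as well; it is when the GO TOs start several loop levels down that the straightforward replacements stop being obviously better.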
These are the kinds of situations where one should be able to construct a suitably small illustration and put it up on a public forum for discussion and feedback. Chances are rather high that some other reader looking at it independently will come up with a program flow without GOTOs that most people will find easier to read than the one with GOTO.