Although I have been familiar with `fpm` and `stdlib` for quite some time, I’ve started to take a keen interest in this discourse and its community only a year-ish ago. In all honesty, I have to say that it has had a transformative impact on my overall Fortran experience and re-ignited my hopes and desires of what a modern Fortran ecosystem for scientific computing could look like. There are so many people to thank for this that I will not even try to come up with an exhaustive list, because I would more likely than not forget too many of you. You all know who you are and what we owe you anyways, so thank you.
Over the past year, `fpm` and `stdlib` have become absolutely critical components of my Fortran toolbox. So critical, actually, that while I used to prototype pretty much everything with a mixed bag of `numpy`, `scipy` and possibly Julia until last spring, I now do so almost exclusively in Fortran directly. Trying a new idea has become as simple as `fpm new my_project` plus adding `stdlib = "*"` to the dependencies in the `fpm.toml` file, and I’m good to go! This rejuvenated workflow is so simple and efficient that it has largely contributed to my group and me being able to write new tools for our research that I wouldn’t even have dreamed of just a couple of years ago.
With the one-year mark of my involvement in this community approaching, I think it is a good time to step back and reflect on a number of things, including:
- Why `fpm` and `stdlib` have been so instrumental for me.
- What exactly `stdlib` enabled me to do that I wouldn’t have been able to do (or only with difficulty) even just a year ago.
- My understanding of where `stdlib` stands today and how it could serve as the foundation block of the Fortran ecosystem for scientific computing.
- What could be improved for new users or potential new contributors.
- More broadly, my point of view on the overall Fortran community and how to help it thrive.
If you made it this far, you’ve probably understood that this will be a pretty long read. Before we get started, let me however quickly introduce myself and give a bird’s-eye view of the environment I work in and what I use Fortran for. I’m not doing this for shameless self-promotion, but simply because it might shed some light on where my potential biases and opinions come from.
Who am I?
I have a background in numerical linear algebra, convex optimization and dynamical systems theory. I did my PhD between 2010 and 2014 on solving very large-scale eigenvalue and singular value problems arising in hydrodynamic stability theory, in order to study the first stages of the transition to turbulence in three-dimensional flow configurations. It is during this period that I was first exposed to Fortran, working with the massively parallel spectral element solver Nek5000 written in F77. Since then, I have dabbled with Python and Julia quite a bit as well, but I always come back to Fortran.
Since 2017, I have been an assistant professor of Applied Mathematics and Fluid Dynamics in Paris. The lab I’m part of, DynFluid, specializes in high-performance computing for fluid dynamics. Most of my colleagues (all Fortraners) perform very large simulations of high-speed flows in the presence of shock waves (think supersonic, if not hypersonic, flows on thousands or tens of thousands of processors). Others are more interested in developing innovative numerical schemes or computational solvers. One thing we are really proud of is dNami, which takes as input a symbolic partial differential equation and automatically generates high-performance code to solve it. Nicolas heavily relies on SymPy for the preprocessing part and has @certik to thank for that, once again.
In contrast, I am more interested in model order reduction to obtain reasonably accurate but blazing fast surrogate models for multi-query problems such as aerodynamic shape optimization. My personal preference, however, goes to feedback control and flow estimation problems, which rely heavily on numerical linear algebra and convex optimization. It is in this context that I started to work on LightKrylov. The goal was to develop a Fortran-based library making use of abstract types to implement a variety of linear solvers that are easy to interface with. We then built LightROM on top of that to solve very high-dimensional Lyapunov and Riccati equations, which are ubiquitous in control theory. My interest in modernizing the `quadprog` solver also comes from there: now that we can obtain reduced-order models of very large-scale simulations, we want to solve the optimal/robust control problem in real time, and we needed a really efficient solver for convex quadratic programming. Over the years, I’ve also worked quite a lot with Steven Brunton from UW, Seattle, on sparse regression problems for data-driven identification of nonlinear dynamical systems, and contributed to the creation of pySINDy, a Python package dedicated to just that and fully compatible with `scikit-learn`.
I think that’s all I have to say on this subject. Back to Fortran now.
I’ll try to structure this post by roughly following the bullet points mentioned above. I do, however, often have a pretty erratic thought process, so I apologize in advance for anything that may look inconsistent or out of place.
Why have `fpm` and `stdlib` been so instrumental to me?
I started my Fortran journey 15-ish years ago with a code handed to me that was written in F77, version-controlled with `svn` and built with a whole lot of makefiles. And boy, do I hate writing makefiles!
Don’t get me wrong though. Makefiles are great once your project is in a somewhat finalized state, but I find them horrible when you do rapid prototyping or while your project is under rapid development with a changing structure. This is where `fpm` and the ecosystem created around it have changed things for the better, as far as I’m concerned. No more fooling around with makefiles, cryptic error messages, problems linking external libraries and what not.
Testing a new idea is now as simple as `fpm new my_project`, a couple of lines added to the `fpm.toml`, a handful of minutes of programming and `fpm run`. Boom, done. The drastic simplification of my workflow was the main selling point for me. Additionally, it made things so simple that I am almost done convincing my colleagues to reintroduce Fortran in their numerical methods classes. I’m not quite there yet, but it’s damn close.
Although I mostly use its linear algebra module, `stdlib` has been equally important to me. While I’ve used `lapack` quite extensively over the years, I always had to check the online documentation to make sure I referenced the correct subroutine and its arguments. Once again, `lapack` is awesome and most of what I’ve ever done relies heavily on it, but it is a pain when you just want to try out an idea. This is where I usually turned to the `numpy`/`scipy` combo, or more recently to Julia. Yet, with the ever-impressive work of @FedericoPerini, @hkvzjal and others on the linear algebra module, I barely use Python nowadays, at least for anything linear-algebra related (which is probably 90% of what I do). The `fpm`/`stdlib` combo (possibly with bits and pieces by @jacobwilliams for polynomial roots or ODE solvers) has made things so easy that it is a pleasure once more to develop things in pure Fortran. And if my crazy ideas work, they can now be rapidly integrated into whatever projects I’m working on with almost no modifications to the code. No more “two languages” problem, which was one of the reasons that got me interested in Julia in the first place.
My vision of where `stdlib` stands today and what it could become
You guessed it, I love `stdlib`. With the exception of a handful of low-hanging utilities (e.g. `cond`, `slogdet`, `matrix_rank` or `matrix_power`), we can claim that it has almost reached parity with `numpy`. And that’s exactly how I see it at the moment: a pure-Fortran, near feature-complete, drop-in replacement for pretty much everything I used to do with `numpy`.
The natural next step forward is parity with `scipy`. While a non-negligible fraction of `scipy.linalg` is already covered by `stdlib`, there are still many things missing (e.g. matrix functions, matrix equation solvers, utilities for specially structured matrices, etc.). Note however that many features of the other `scipy` submodules have already been partially implemented by various members of the community in their own repos. @jacobwilliams in particular has done a hell of a job modernizing so many packs! Here is a non-exhaustive list of things that already exist and could eventually be integrated into `stdlib`:
- `scipy.cluster`: I hardly ever used this module. There may be implementations out there that are already compatible with `stdlib`, but I have no idea.
- `scipy.constants`: I think most of it (if not all of it) is already covered by the `stdlib_codata` module.
- `scipy.datasets`: It only provides access to a couple of very simple datasets, which I have never used. It is probably very easy to emulate, but I don’t think it needs to be very high on anyone’s priority list.
- `scipy.differentiate`: @jacobwilliams has us pretty much covered with NumDiff. According to the README, the only major thing missing is support for computing the Hessian matrix.
- `scipy.fft` and `scipy.fftpack`: Most of the core functionality is provided by the modernized implementation of fftpack, whose repo already falls under the fortran-lang umbrella. I don’t think it would take long to extend it to cover the remaining pieces (mostly convolution and differential operators).
- `scipy.integrate`: Once again, @jacobwilliams has us pretty much covered with the modernized versions of Quadpack, odepack, his fantastic rklib package, and dvode.
- `scipy.interpolate`: @jacobwilliams, our one-man team, has bspline-fortran, splpak, regridpack, pchip, and finterp.
- `scipy.io`: `stdlib_io_npy` already provides utility functions for `npy` and (soon?) `npz` files. Support for Matlab and Matrix Market files might be useful as well.
- `scipy.linalg`: my own repo SpecialMatrices tries to provide specialized drivers for highly structured matrices such as tridiagonal, symmetric tridiagonal, Strang, Poisson2D, circulant, Toeplitz, or Hankel matrices. I also have routines here and there to compute the matrix exponential or solve Sylvester/Lyapunov/Riccati equations.
- `scipy.ndimage`: I have never used this module and I have no idea to what extent Fortran is being used for image processing.
- `scipy.odr`: @HugoMVale already has a modernized version of ODRPACK.
- `scipy.optimize`: By far one of the modules I use the most and what, I believe, is currently missing from `stdlib`. While I’ve just started to work on modernizing QuadProg for convex quadratic programming, @jacobwilliams (him again) has modernized many codes, including PSQP, OptGra, conmin, conmax, slsqp, NLESolver-Fortran, LSMR, LSQR, fmin, lbfgsb, PowellOpt, and fitpack, for instance.
- `scipy.signal`: I don’t know of any particular repo implementing these techniques, although a large number of them could be built on top of `fftpack`.
- `scipy.sparse`: The core features are already partially covered by `stdlib_sparse` (thanks to @hkvzjal among others), and there is an ongoing effort by @kimala to implement sparse linear solvers (e.g. conjugate gradient, GMRES) in `stdlib`.
- `scipy.spatial`: I have never used this module, so I have no idea what would be available in Fortran.
- `scipy.special`: I seem to recall that someone has already implemented some of these special functions (notably the Bessel functions, maybe).
- `scipy.stats`: `stdlib` provides some support for the uniform and Gaussian distributions, but it is nowhere near as feature-complete as this scipy module.
This list turned out to be a bit more exhaustive than I anticipated when I started writing this post.
I’ve probably forgotten many repos, and I apologize to whoever may be concerned. Nonetheless, it seems to me that reaching parity with `scipy` probably ain’t as daunting a task as it may look at first. We would “simply” need a coordinated community effort to regroup most of these packages under the fortran-lang or the `stdlib` umbrella (provided the owner of each repo is OK with it, obviously). There might be some licensing issues to deal with, and more importantly some (possibly heated) discussions about standardizing the APIs and translating to fypp, but from what I can see, a large part of the core programming is almost done.
I realize that it is easy for me to say “we simply need this and that” when I myself don’t have as much time as I’d like to contribute. Nonetheless, I think that reaching parity with `scipy` relatively rapidly is not only possible but would also be a major milestone for the scientific Fortran ecosystem overall. I know for sure that, for many of my colleagues, it might be the tipping point for transitioning back fully to Fortran (notably for teaching, but also for data post-processing, for instance).
A few things that could make it easier for new users and/or new contributors
To me, the `fpm` + `stdlib` combo definitely is the main selling point of this modern Fortran ecosystem, at least as a user. From a new contributor’s point of view, however, there are a few things that may put off long-time Fortraners: the use of `fypp`, the very thorough PR process in `stdlib` (i.e. code + tests + documentation + specifications), and the fact that contributions need to target the `cmake`-based master branch rather than `stdlib-fpm` directly.
Just to be clear, this is by no means a criticism. I totally understand why it is the way it is, and how important and beneficial it is, but still, it was a little off-putting at first. My understanding is that most Fortraners are used to developing bits and pieces for their own applications, without necessarily putting too much emphasis on standardization, documentation, handling different precisions, etc. It certainly is the case when I discuss with my colleagues.
One possibility to alleviate this reluctance could be for the fortran-lang GitHub organization to provide a template repo mirroring the structure of the `stdlib` one (including the GitHub actions), with documentation as detailed as what we get in the `fpm.toml` file when running `fpm new --full`. This could actually serve several purposes at once:
- Make it easier for new contributors to understand the design choices of the repo itself and experiment with it.
- Provide a well-documented and standardized repo structure that could be used to develop and iterate on functionalities too large or too experimental to be included directly in `stdlib` initially.
- Provide a well-documented and standardized repo structure for major contributions that ought to be included in `stdlib` but cannot be, for licensing reasons for instance.
I don’t think it will be a silver bullet that convinces everybody, but it might be enough for some folks to take the first step toward contributing to the ecosystem. There probably are other actions one could take, but I can’t think of easy ones at the moment and this post is already long enough as it is.
A thriving community
To cut a long story short, I’m very happy with the community you guys have created and I’m eager to see what will come out of it in the next few years. As far as I’m concerned, `fpm`, `stdlib`, `fortls` and this discourse have all contributed to renewing my pleasure in programming in Fortran. I’m also looking forward to community-wide events such as the 2026 FortranCon, which I’m seriously considering attending in order to meet as many of you as possible in person.
FORTRAN is dead! Long live Fortran!
PS: I apologize to @Beliavsky, who keeps editing my posts for typos. I tried to track down as many of them as possible but, for some reason, my spellcheck is not working and I forgot my glasses today >.<