Some reflections about stdlib, where it stands and what it could become

Although I have been familiar with fpm and stdlib for an extended period of time, I’ve started to take a keen interest in this discourse and its community only a year-ish ago. In all honesty, I have to say that it has had a transformative impact on my overall Fortran experience and re-ignited my hopes and desires of what a modern Fortran ecosystem for scientific computing could look like. There are so many people to thank for this that I will not even try to come up with an exhaustive list because I’ll more likely than not forget too many of you. You all know who you are and what we owe you anyway, so thank you.

Over the past year, fpm and stdlib have become absolutely critical components of my Fortran toolbox. So critical actually that, while I used to prototype pretty much everything with a mixed bag of numpy, scipy and possibly Julia until last spring, I now do so pretty much exclusively in Fortran directly. Trying a new idea has become as simple as fpm new my_project along with adding stdlib = "*" to the dependencies in the toml file and I’m good to go! This rejuvenated workflow is so simple and efficient that it largely contributed to my group and I being able to write new tools for our research that I wouldn’t even have fathomed just a couple of years ago.

With the one year mark of my involvement in this community approaching, I think it is a good time to actually take some time and reflect on a number of things, including:

  • Why have fpm and stdlib been so instrumental for me?
  • What exactly did stdlib enable me to do that I wouldn’t have been able to do (or only with great difficulty) even just a year ago?
  • My understanding of where stdlib stands today and how it could serve as the foundation block of the Fortran ecosystem for scientific computing.
  • What could be improved for new users or potential new contributors.
  • More broadly, what is my point of view on the overall Fortran community and how to help it thrive.

If you made it this far you’ve probably understood that it’ll be a pretty long read. Before we get started, let me however quickly introduce myself and give a bird’s eye view of the environment I work in and what I use Fortran for. I’m not doing this out of shameless self-promotion or anything of the sort, but simply because it might shed some light on where my potential biases and opinions come from.

Who am I?

I have a background in numerical linear algebra, convex optimization and dynamical systems theory. I did my PhD between 2010 and 2014 on solving very large-scale eigenvalue and singular value problems arising in hydrodynamic stability theory, to study the first stages of the transition to turbulence in three-dimensional flow configurations. It is during this period that I was first exposed to Fortran, working with the massively parallel spectral element solver Nek5000 written in F77. Since then, I’ve dabbled with Python and Julia quite a bit as well but always came back to Fortran.

Since 2017, I have been an assistant professor of Applied Mathematics and Fluid Dynamics in Paris. The lab I’m part of, DynFluid, specializes in high-performance computing for fluid dynamics. Most of my colleagues (all Fortraners) perform very large simulations of high-speed flows in the presence of shock waves (think supersonic, if not hypersonic, flows with thousands or tens of thousands of processors). Others are more interested in developing innovative numerical schemes or computational solvers. One thing we are really proud of is dNami, which takes as input a symbolic partial differential equation and automatically generates high-performance code to solve it. Nicolas heavily relies on SymPy for the preprocessing part and once again has @certik to thank for that.

In contrast, I am more interested in model order reduction to obtain reasonably accurate but blazing fast surrogate models for multi-query problems such as aerodynamic shape optimization. My personal preference however goes to feedback control and flow estimation problems, which heavily rely on numerical linear algebra and convex optimization. It is in this context that I’ve started to work on LightKrylov. The goal was to develop a Fortran-based library making use of abstract types to implement a variety of linear solvers that is easy to interface with. We then built LightROM on top of that to solve very high-dimensional Lyapunov and Riccati equations, which are ubiquitous in control theory. My interest in modernizing the quadprog solver also comes from there: now that we can obtain reduced-order models of very large-scale simulations, we want to solve the optimal/robust control problem in real time and needed a really efficient solver for convex quadratic programming. Over the years, I’ve also worked quite a lot with Steven Brunton from UW, Seattle, on sparse regression problems for data-driven identification of nonlinear dynamical systems and contributed to the creation of pySINDy, a Python package dedicated to just that and fully compatible with scikit-learn.

I think that’s all I have to say on this subject. Back to Fortran now.

I’ll try to structure this post by roughly following the bullet points I’ve mentioned before. I however often have a pretty erratic thought process so I apologize in advance for anything that may look inconsistent or out of place.

Why have fpm and stdlib been so instrumental to me?

I started my Fortran journey 15-ish years ago with a code handed to me written in F77, version-controlled with svn and a whole lot of makefiles. And boy do I hate writing makefiles!
Don’t get me wrong though. Makefiles are great once your project is in a somewhat finalized stage, but I find them horrible when you do rapid prototyping or while your project is under rapid development with a changing structure. This is where fpm and the ecosystem created around it have changed things for the better, as far as I’m concerned. No more fooling around with makefiles, cryptic error messages, problems linking external libraries and whatnot.
Testing a new idea is now as simple as fpm new my_project, a couple of lines added to the fpm.toml, a handful of minutes programming and fpm run. Boom, done. The drastic simplification of my workflow was the main selling point for me. Additionally, it made things so simple that I am almost done convincing my colleagues to reintroduce Fortran in their numerical methods classes. I’m not quite there yet but it’s damn close.
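For the record, the whole setup fits in a handful of lines in the generated manifest. A minimal sketch (my_project is just the placeholder name produced by fpm new):

```toml
# fpm.toml — as generated by `fpm new my_project`, plus the stdlib dependency
name = "my_project"
version = "0.1.0"

[dependencies]
stdlib = "*"
```

That really is all the build configuration needed before `fpm run` works.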

Although I mostly use its linear algebra module, stdlib has been equally important to me. While I’ve used lapack quite extensively over the years, I always had to check the online documentation to make sure I referenced the correct subroutine and its arguments. Once again, lapack is awesome and most of what I’ve ever done relies heavily on it, but it is a pain when you just want to try out an idea. This is where I usually turned to the numpy/scipy combo, or more recently to Julia. Yet, with the ever-impressive work of @FedericoPerini, @hkvzjal and others on the linear algebra module, I barely use Python nowadays, at least for anything linear algebra related (which is probably 90% of what I do). The fpm/stdlib combo (possibly with bits and pieces by @jacobwilliams for polynomial roots or ODE solvers) has made things so easy that it is a pleasure once more to develop things in pure Fortran. And if my crazy ideas work, they can now be rapidly integrated into whatever projects I’m working on with almost no modifications to the code. No more “two languages” problem, which was one of the reasons that got me interested in Julia in the first place.
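To give an idea of what this looks like in practice, here is the kind of numpy-style one-liner the linear algebra module allows. This is a hedged sketch based on my reading of the stdlib_linalg interfaces for solve and svd; double-check the argument lists against the specification pages:

```fortran
program linalg_demo
   ! Sketch only: assumes the stdlib_linalg `solve` function and `svd`
   ! subroutine interfaces; verify against the stdlib specs.
   use stdlib_linalg, only: solve, svd
   implicit none
   real :: A(3,3), b(3), x(3)
   real :: s(3), u(3,3), vt(3,3)

   call random_number(A)
   call random_number(b)

   ! Solve the linear system A x = b, numpy.linalg.solve-style.
   x = solve(A, b)

   ! Singular value decomposition of A.
   call svd(A, s, u, vt)
end program linalg_demo
```

Compare this with hand-wiring the corresponding lapack drivers and their workspace arrays: that difference is exactly why I stopped reaching for numpy.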

My vision of where stdlib stands today and what it could become

You guessed it, I love stdlib. With the exception of a handful of low-hanging utilities (e.g. cond, slogdet, matrix_rank or matrix_power), we can claim that it has almost reached parity with numpy. And that’s exactly how I see it at the moment: a pure Fortran and near feature-complete drop-in replacement for pretty much whatever I used to do with numpy.
The natural next step forward is parity with scipy. While a non-negligible fraction of scipy.linalg is already covered by stdlib, there are still many things missing (e.g. matrix functions, matrix equation solvers, utilities for specially structured matrices, etc.). Note however that many features of the other scipy submodules have already been partially implemented by various members of the community in their own repos. @jacobwilliams in particular did a hell of a job modernizing so many of the old *PACK libraries! Here is a non-exhaustive list of things that already exist and could eventually be integrated into stdlib:

  • scipy.cluster : I hardly used this module. There may already be some implementations out there compatible with stdlib, but I have no idea.
  • scipy.constants : I think most of it (if not all of it) is already covered by the stdlib_codata module.
  • scipy.datasets : It provides access only to a couple of very simple datasets which I never ever used. It is probably very easy to emulate but I don’t think it needs to be very high on one’s priority list.
  • scipy.differentiate : @jacobwilliams got us pretty much covered with NumDiff. According to the README, the only major thing missing is the support for computing the Hessian matrix.
  • scipy.fft and scipy.fftpack : Most of the core functionalities are provided by the modernized implementation of fftpack, whose repo already falls under the fortran-lang umbrella. I don’t think it’d take long to extend it to cover the remaining things (mostly convolution and differential operators).
  • scipy.integrate : Once again, @jacobwilliams got us pretty much covered with the modernized versions of Quadpack, odepack, his fantastic rklib package, and dvode.
  • scipy.interpolate : @jacobwilliams, our one-man team, has bspline-fortran, splpak, regridpack, pchip, and finterp.
  • scipy.io : stdlib_io_npy already provides utility functions for npy and (soon?) npz files. Support for Matlab and Matrix Market files might be useful as well.
  • scipy.linalg : my own repo SpecialMatrices tries to provide some specialized drivers for highly structured matrices such as tridiagonal, symmetric tridiagonal, Strang, Poisson2D, circulant, Toeplitz, or Hankel matrices. I also have routines here and there to compute the matrix exponential or solve Sylvester/Lyapunov/Riccati equations.
  • scipy.ndimage : I never ever used this module and I have no idea to what extent Fortran is being used for image processing.
  • scipy.odr : @HugoMVale already has a modernized version of ODRPACK.
  • scipy.optimize : By far one of the modules I use the most and what, I believe, is currently missing in stdlib. While I’ve just started to work on modernizing QuadProg for convex quadratic programming, @jacobwilliams (him again) has modernized many codes, including PSQP, OptGra, conmin, conmax, slsqp, NLESolver-Fortran, LSMR, LSQR, fmin, lbfgsb, PowellOpt, and fitpack.
  • scipy.signal : I don’t know of any particular repo implementing these techniques, although a large number of them could be built on top of fftpack.
  • scipy.sparse : The core features are already partially covered by stdlib_sparse (thanks to @hkvzjal among others) and there is an ongoing effort by @kimala to implement sparse linear solvers (e.g. conjugate gradient, gmres) in stdlib.
  • scipy.spatial : I never used this module, so I have no idea what would be available in Fortran.
  • scipy.special : I seem to recall that someone already has some of these special functions implemented (notably the Bessel functions maybe).
  • scipy.stats : stdlib provides some support for the uniform and Gaussian distributions, but it is nowhere near as feature-complete as this scipy module.

This list turned out to be a bit more exhaustive than I anticipated when I started writing this post.
I’ve probably forgotten many repos and I apologize to whoever may be concerned. Nonetheless, it seems to me that reaching parity with scipy probably ain’t as daunting a task as it may look at first. We would “simply” need a coordinated community effort to regroup most of these packages under the fortran-lang or the stdlib umbrella (provided the owner of each repo is ok with it, obviously). There might be some licensing issues that would have to be dealt with, and more importantly some (possibly heated) discussions about standardizing the API and translating to fypp, but from what I can see, a large part of the core programming is almost done.

I realize that it is easy for me to say “we simply need this and that” when I myself don’t have as much time as I’d like to contribute. Nonetheless, I think that reaching parity with scipy relatively rapidly is not only possible but would be a major milestone for the scientific Fortran ecosystem overall. I know for sure that, for many of my colleagues, it might be the tipping point for transitioning back fully to Fortran (notably for teaching but also for data post-processing, for instance).

A few things that could make it easier for new users and/or new contributors

To me, the fpm + stdlib combo definitely is the main selling point of this modern Fortran ecosystem, at least as a user. From a new contributor’s point of view, there are however a few things that may put off long-time Fortraners: the use of fypp, the very thorough PR process for stdlib (i.e. code + tests + documentation + specifications) and the fact that contributions need to target the CMake-based master branch rather than stdlib-fpm directly.
Just to be clear, this is by no means a criticism. I totally understand why it is the way it is and how important and beneficial it is, but still, it was a little off-putting at first. My understanding is that most Fortraners are used to developing bits and pieces for their own applications, without necessarily putting too much emphasis on standardization, documentation, being able to handle different precisions, etc. It certainly is the case when I talk with my colleagues.

One possibility to alleviate this reluctance could be for the fortran-lang GitHub organization to provide a template repo mirroring the structure of the stdlib one (including the GitHub Actions) with documentation as detailed as what we get in the toml file when running fpm new --full. This could actually serve several purposes:

  • Make things easier for new contributors to understand the design choices of the repo itself and experiment with it.
  • Provide a well documented and standardized repo structure that could be used to develop and iterate on functionalities too large or experimental to be directly included in stdlib initially.
  • Provide a well documented and standardized repo structure for major contributions that ought to be included in stdlib but cannot be, for licensing reasons for instance.

I don’t think it will be a silver bullet that convinces everybody, but it might be enough for some folks to take the first step toward contributing to the ecosystem. There probably are other actions one could take, but I can’t think of easy ones at the moment and this post is already long enough as it is.

A thriving community

To cut a long story short, I’m very happy with the community you guys have created and I’m eager to see what will come out of it in the next few years. As far as I’m concerned, fpm, stdlib, fortls and this Discourse have all contributed to renewing my pleasure in programming in Fortran. I’m also looking forward to community-wide events such as the 2026 FortranCon, which I’m seriously considering attending in order to meet as many of you as possible in person.

FORTRAN is dead! Long live Fortran!

PS: I apologize to @Beliavsky who keeps editing my posts for typos. I tried to track down as many of them as possible but, for some reason, my spellcheck is not working and I forgot my glasses today >.<

30 Likes

Beautiful! Honestly, almost like reading my own thoughts!! When I stumbled upon the community at the end of 2022 I couldn’t believe it, and when joining in 2023 I just felt the moral obligation to give back to say thanks to the huge work by so many here!!

If it can help: when I started contributing to stdlib I also felt the pain here. That’s why I proposed a Python script to build and deploy directly from the root of the main branch with fpm:

python config/fypp_deployment.py
fpm build --profile release

Ideally, fpm should (could) handle this itself by being able to call out to a bit of externally executed Python for the fypp preprocessing. That would make this specific pain point go away.

2 Likes

The missing link, as originally envisioned, is an official repository and a nursery of independent projects marked for progression into stdlib. An alpha version of stdlib could then be stdlib plus dependencies that would include copies of the selected candidates.

It was envisioned that there would be a natural progression from public fpm projects brought forth to be communally worked on, through a natural selection of features from competing proposals that would become a repository version. Popular repository entries would then be migrated into stdlib. That pipeline has not really emerged as expected as the path for inclusion into stdlib.

In some ways it has become netlib 2.0, but in a good way. Many packages that were being lost or not being ported to take advantage of modern architectures have been modernized. New scientific packages are much more likely to become fpm/stdlib components than to become netlib donations.

That is all good, but as you mentioned, the steps to go from a single Fortran-centric developer, who may have never used markdown, GitHub, CI/CD, PRs, git, or fpm, to a contributor can be an insurmountable hurdle.

I started a template for just setting up a basic fpm/GitHub project several years ago myself (urbanjost/easy: steps to set up a GitHub repository with fpm and GitHub Actions, including ford(1) documentation and unit tests). There have been other, more recent projects along that line, but perhaps the stdlib project itself should set up a nursery for (vetted) projects to help create a community-supported environment where the main thing contributors need to supply is just the Fortran code. So there could be a number of “empty” projects, owned by the stdlib administrators, available at a “nursery” GitHub organization. Candidates like odepack could be copied there and worked on as community projects with the goal of incorporating them into stdlib or a communal “repository” version.

I wonder how many people have something they desire to be an “official” package but are stopped by all the baggage the prerequisites and required infrastructure bring? I know of some interesting code supported only with vim/make/Fortran compiler myself where people have mentioned it is too daunting to change to git/ford/fpm/github/markdown/CD-CI/fpm … without help in order to contribute.

2 Likes

@loiseaujc excellent!

We are going to do a strong push with fpm and stdlib this year, and we will start the discussion about the roadmap very soon. For now, you can get in touch with @hkvzjal, @FedericoPerini, @jeremie.vandenplas, @ivanpribec and others, who are very active there.

3 Likes

You make some very good points, and a lot of what you write also applies to/resonates with me. I haven’t used stdlib as much, but I love it so far and really think this is one of the important cornerstones to progressing the state of Fortran.

I hit a dead end fairly quickly with stdlib here, which prompted me to look into it more to see if I could contribute to the stats part of stdlib in particular (I’ve done quite a bit of statistical modelling in Fortran in the past and am in the process of cleaning up some of the code used for publications and putting it on GitHub). However, this is where some of the hurdles you and @urbanjost mention come in (fypp in particular for me).

While well-justified and not insurmountable, they do mean having to set aside a larger chunk of time to learn x-y-z before contributing code that may already be fully functional and sitting around somewhere.

Thank you for your testimony @loiseaujc! I don’t have much to add to the discussion besides that I think you’ve made two very important points: that if something is moving forward for Fortran, it is for two reasons:

  • things “just work”
  • things are easy to use/setup as a programmer.

I feel you when you say that sometimes you’d like to see things progress faster, but in the end, as @urbanjost said, stdlib should contain high-quality, production code, so sometimes it’s a better approach if we all cook up our own “nurseries” (or as I call them, “quarries”) and then work to get stuff into stdlib and fpm when it’s production-ready. @jeremie.vandenplas and @hkvzjal will confirm that the great teamwork we did putting linear algebra into stdlib has been a combination of a production-ready codebase (nothing less than LAPACK, the most battle-tested repository in history), a relatively easy API (who doesn’t want it to be similar to the numpy/scipy linear algebra APIs?) and long hours of code review.

This last point is very important, and this year has been a great learning process for me as well: it’s not only about the code alone, but about the people who need to help read, validate and ultimately improve and merge it. Common mistakes I used to make include putting together too much code at once, too much verbosity, or changing the code too much during review; these mistakes ultimately crush the reviewers, who will be in trouble reviewing it :slight_smile:

4 Likes

Your post reminded me of what @milancurcic said at FortranCon 2020 (and perhaps also on a couple later occasions):

[Fortran] should feel like play, not work.

The full set of slides can be found here.

4 Likes

I would like to add my voice to those who list fypp as an obstacle to using stdlib. My personal philosophy on software development projects is that they should be as self-contained as possible. All libraries etc. should “feel” like they are an intrinsic part of the language and, to the maximum extent possible, function that way. To me that means making accessing and linking them into your project as invisible to the user as possible. I developed a very large distaste for projects that (over-)rely on third-party libraries when I was asked to install and maintain a DoE C++ code on one of the DoD HPCMP systems. The base code would compile OK, but getting the third-party code it relied on to build and link correctly was a nightmare. In an ideal world, the vendors would implement native support for stdlib in a manner similar to what you see with numpy and scipy. I doubt that will ever happen though.

1 Like

Fypp is a single-file Python script. If downloading it is an obstacle, we can probably just add it to the project. The use of Fypp should not scare potential contributors away. It is just the means to realize function overloading. You are welcome to suggest/contribute interfaces for a single kind, and the stdlib maintainers can help with the templating if that becomes an issue.
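To demystify what such templating amounts to, here is a toy sketch in plain Python. This is not fypp itself, and TEMPLATE/expand are names I made up purely for illustration: it simply expands one generic routine body over a list of real kinds, which is the essential service fypp renders for stdlib.

```python
# Toy illustration of kind-expansion templating (NOT fypp itself):
# emit one plain-Fortran specialization of a generic routine per real kind.
TEMPLATE = """\
pure function norm_{kind}(x) result(n)
   real({kind}), intent(in) :: x(:)
   real({kind}) :: n
   n = sqrt(sum(x**2))
end function norm_{kind}
"""

def expand(template: str, kinds: list[str]) -> str:
    """Return the concatenation of one specialization per kind."""
    return "\n".join(template.format(kind=k) for k in kinds)

if __name__ == "__main__":
    # Generates norm_sp and norm_dp from the single generic body above.
    print(expand(TEMPLATE, ["sp", "dp"]))
```

fypp of course does far more (conditionals, macros, variable interpolation, line numbering), but the core idea is just this kind of systematic text expansion.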

5 Likes

While I do understand and share to some extent your point of view @rwmsu, on this specific issue we have to face the fact that Fortran currently does not have anything close to generics/templates. We know that a group is working on that, but even if it were to be finalized and approved tomorrow, it would take years to be usable in a robust and wide manner from several compilers.

I have to say, fypp being a simple Python script that gives you back the actual plain Fortran files, it barely adds a small learning step, but it honestly opens many possibilities: it truly enables Fortranners of today (not of some hypothetical 20XX in the future) to code once and get all their kinds/types in one go. I learned about fypp when joining this community and sat down one weekend to get used to it; if one is fluent in Fortran and has some fluency in Python, fypp comes naturally.

I totally back what @ivanpribec said. If one wants to contribute to stdlib but doesn’t feel comfortable (yet) with fypp templating, propose a plain Fortran version and the maintainers will certainly help get it in shape! Do you just want to get stdlib running with your project? The stdlib-fpm and stdlib-fpm-ilp64 branches are there for that, no fypp in the way.

It also comes down to this point! We should not forget that we are all biased by our own learning and professional paths. We can try all we want to get as broad a view as possible; it will never suffice. But by working together we help each other cover those blind spots. If one knows that a specific contribution has some sort of “genericity”, then it can be worked out.

Let me stress it again: stdlib does not depend on fypp to build or run, but it does need it as of today for its development process. And I have to say thank you to @aradi, BTW, for crafting it!

1 Like

There will be many algorithms coded in Fortran that are not part of stdlib. My list of codes does have broad categories such as optimization, but the Guide to Available Mathematical Software (GAMS) has sub-categories that are much narrower, for example unconstrained optimization of a smooth multivariate function where the user provides (a) no derivatives, (b) first derivatives, or (c) first and second derivatives. Unfortunately GAMS is for the most part no longer updated, mostly referring to codes from Netlib, IMSL, and NAG (the former mostly Fortran 77, the latter two proprietary).

An updated GAMS would be nice. I asked grok to create updated lists for two categories as an experiment, and it did a decent job. Ideally human subject matter experts would take such auto-generated lists as a starting point and edit them.

R has 20,000+ packages. It has Task Views in which domain experts classify and describe the packages in their field that they think are important. Ideally Fortran would too.

I think I did not express myself correctly. I have absolutely no problem with the pace at which stdlib is evolving, quite the contrary actually, and for many reasons. As you’ve said, stdlib should contain high-quality/production code. It’d be a shame to realize in one, two or three years that a particular piece of stdlib, rushed in and instrumental for the rest, turned out to be poorly designed and would need an astronomical effort to correct, along with all the downstream consequences. More importantly, working on stdlib is a purely volunteer effort. On a personal level, I teach ~150 hours per semester, I have to mentor PhD and/or Master’s students, plus anything research-related (which I guess many of us are familiar with), along with having two young kids to look after. I can only spend so much time, and I suppose many others are in the same situation.

In all honesty, I’m actually uber impressed by how far stdlib has come since I first heard about it. What I meant by “relatively rapidly” is not quite “rapidly” in the sense that it could be done in a matter of weeks or a few months. What I meant is that I can actually visualize it happening (possibly in two to three years), something I wouldn’t even have thought possible a mere two years ago.

On the topic of having one’s own nursery:

I totally agree with that. What I was suggesting was to create a template repo that would make this process even easier while enabling the project to have a life of its own. Take a hypothetical optimization module as an example. The repo could mirror the structure of stdlib. People interested in working on such algorithms could contribute right away and, more importantly, use it directly in whatever application they’re working on rather than having to git switch to a particular branch of stdlib which may not contain other useful features already merged into the main branch. Once they deem the package ready for integration into stdlib, the process would be nearly as simple as copy-pasting, and the transition from this experimental package to stdlib almost entirely transparent to the users. There are no time constraints. And if the package eventually branches off too far, so be it; it’ll still have a life of its own.

That is kind of the idea I had in mind. There are some features that we all know should and will eventually be included in stdlib. They may however not be top priority at the moment, because the stdlibers are focusing their efforts on another, lower-level and possibly more critical part, or simply because the overall stdlib structure is not yet entirely ready for them. Yet it might still be beneficial to the community to already have access to them, and having them under the umbrella of fortran-lang rather than in someone’s personal repo would certainly be beneficial. I think what I have in mind is pretty similar to the original idea of the scikits in Python, but I may be wrong.

I couldn’t agree more. It took me only an afternoon or so to get familiar with fypp, at least to a sufficient level. A couple of days later, I had it integrated into my workflow for LightKrylov, which opened up some new avenues for me. The only thing I’m missing is proper syntax highlighting when working on a fypp file in neovim (but I guess it is pretty much the same in other editors).

3 Likes

Thanks for this explanation, @hkvzjal. As someone who’s also comfortable with Python, this probably also just means “one weekend” then. That is not immediately apparent when simply going straight to the code (1st thing I do usually), so this is very reassuring and motivating.

Perhaps it’s worth introducing this (at FortranCon or independently of that) in context of contributions to Fortran projects to demonstrate that these aren’t any real hurdles?

1 Like

Let me put in a good word for fypp, from someone who actually despises Python.

Most of the work in developing a new tool for stdlib is designing a good API and handling multiple data types. But this work is pure Fortran thinking. If you decide to build a new code for stdlib or to port an existing one into the library, you will see that there is a lot of work before you even get to needing a preprocessor. You could even postpone it to the very end: you can organize your code and develop it with a minimal set of routines to shape it. If this last step is the only thing preventing people from porting their stuff into stdlib, please come discuss how to do it here on the Discourse; I’m pretty sure someone will help you.

About the cumbersomeness of relying on third-party libraries, I don’t think this is a problem for Fortran+fypp. From a quick look into the fypp source, it looks like pure Python, and Python is probably installed on every computer in the world. This is not the same as having to rely, let’s say, on zlib+slib+HDF5+netcdf-c+netcdf-fortran+XIOS2, in this specific order, with exactly the same compiler, just to run NEMO (an example from the production ocean model currently used in multiple facilities like the MetOffice, MeteoFrance and many other research institutes).

I do agree that the syntax of fypp is unpleasant (but I also recognize that it must be hell on earth to design such a tool, so respect and many thanks to @aradi), but is it really worse than having cpp macros and generic.h90 files?

   !!----------------------------------------------------------------------
   !!
   !!                  ***   lbc_lnk_call_[234]d_[sd]p   ***
   !!                  ***     load_ptr_[234]d_[sd]p     ***
   !!
   !!----------------------------------------------------------------------
   !!
   !!   ----   SINGLE PRECISION VERSIONS
   !!
#define PRECISION sp
# define DIM_2d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_2d
# define DIM_3d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_3d
# define DIM_4d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_4d
#undef PRECISION
   !!
   !!   ----   DOUBLE PRECISION VERSIONS
   !!
#define PRECISION dp
# define DIM_2d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_2d
# define DIM_3d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_3d
# define DIM_4d
#    include "lbc_lnk_call_generic.h90"
# undef  DIM_4d
#undef PRECISION

(Another example from NEMO)

In my list of favorite languages even Texas Instruments BASIC comes before Python, and fypp is no better than Python: it makes the code look weird, it does not preserve the indentation I want, and in general I don't like it. But it is very easy to learn, it does an incredible job, and its versatility is amazing. I doubt you will be disappointed by its capabilities.


I remember reading that the original LINPACK (or perhaps LAPACK) authors also used a preprocessor to generate the procedures for single, double, complex, and complex-double precision – the sdcz convention. I’m not sure if they used M4 or another macro processor of that era.

Since Fortran (still) lacks built-in support for this kind of code generation, we're forced to use tools like Fypp out of pragmatism. This complexity remains hidden from callers of the library through the use of generic procedures introduced in Fortran 90.
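To make the sdcz point concrete, here is a sketch of how a Fortran 90 generic interface can hide two of the classic kind-specific BLAS names behind a single call (the interface bodies assume a linked reference BLAS; the complex variants `scnrm2`/`dznrm2` would be added the same way):

```fortran
!> Sketch: one generic name over the classic s/d BLAS nrm2 variants.
module blas_nrm2_wrap
  implicit none
  !> The caller just writes nrm2(...); the compiler picks the right routine
  !> from the argument kinds.
  interface nrm2
    function snrm2(n, x, incx)
      integer, intent(in) :: n, incx
      real, intent(in) :: x(*)
      real :: snrm2
    end function
    function dnrm2(n, x, incx)
      integer, intent(in) :: n, incx
      double precision, intent(in) :: x(*)
      double precision :: dnrm2
    end function
  end interface
end module
```

The preprocessor (M4 then, Fypp now) generates the kind-specific bodies; the generic interface is what keeps that machinery invisible to users.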

I understand that's true for people doing development, but why are the majority of the files under the stdlib GitHub /src directory in fypp format? Can't you have a separate development directory or repository for the fypp code and make all the user-facing source files in /src plain Fortran, or at least provide another directory containing the translated source (i.e. just real Fortran)? I'll admit I'm probably missing something here, but as it stands you have to clone or download the GitHub repository and run fpm or CMake, which I presume invokes Python to generate the kind-specific versions of each routine and builds a library containing all of stdlib. What if I just want one particular file in double precision? With the translated source available, without running fpm or Python, I could just download that file and its dependencies (something you used to be able to do on Netlib) and integrate them into my code base.


You can find the preprocessed code in the automatically deployed stdlib-fpm branch: GitHub - fortran-lang/stdlib at stdlib-fpm. This is the branch that can be used as an fpm package, as @hkvzjal has mentioned.
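For anyone who wants to consume that preprocessed branch directly, the dependency entry in `fpm.toml` looks something like this (branch name taken from the post above; check the repository for the current scheme):

```toml
[dependencies]
stdlib = { git = "https://github.com/fortran-lang/stdlib", branch = "stdlib-fpm" }
```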

I guess we could think about organizing the code the other way round: have the main branch be clean, Fortran-only code, and keep the Fypp sources in a dev branch.

Regarding double precision: if I'm not mistaken, the stdlib-fpm branch contains only single and double precision; if you need 80-bit or quad-precision reals, you will need to invoke CMake yourself.
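For reference, stdlib's CMake build exposes options for the extended kinds; to my knowledge the relevant flags are `WITH_XDP` (extended/80-bit) and `WITH_QP` (quad), though it is worth double-checking against the current build documentation:

```sh
# Configure and build stdlib with the extended-precision kinds enabled
# (flag names per stdlib's CMake build; verify against current docs)
cmake -B build -DWITH_XDP=ON -DWITH_QP=ON
cmake --build build
```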

Regarding one file: there is some interconnectedness among stdlib modules; they aren't completely independent. Making the modules stand-alone would introduce code duplication (e.g. for error handling, helper functions, and so on), so historically this just wasn't pursued very strongly, although it would be desirable IMO. In one case I remember a file had to be split because it otherwise consumed too much memory during compilation (the statistics procedures for array ranks > 4).


I would support this for no other reason than that people who just want Fortran code and are not familiar with stdlib will look at all the fypp files and think, "what the heck is this? I thought this was a Fortran code repository."


I honestly think this makes a lot of sense, as it is what I do on my own branch so that I can open and work on the project in Visual Studio on Windows. The fypp license also allows it. I suggested it in a PR last year, but it got shot down; the maintainers weren't in favor of it at the time.

Given current LLM capabilities, I wonder what the cost would be of developing a Fortran implementation of fypp as part of stdlib. Imagine the impact! It could be integrated anywhere, including fpm. I believe it would be easy to coordinate with @aradi on the supported features and versioning.

I also started off slowly because I didn't want to pollute Fortran with preprocessing directives too much, but the achievements in stdlib show that the benefits vastly outweigh the cost, imho.
