New project: Should I use fpm? Meson? stdlib? etc

Personally, I find Makefiles work great for small projects. Make is a lean, minimalist program: basically just a task scheduler. When it comes to using Make for Fortran projects, the biggest problem is the lack of built-in module dependency resolution. It’s true there are a few public codes which can do this for you, but they tend to have some weaknesses.
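For a small project you can simply spell the module dependencies out by hand. A minimal sketch (the file names are made up; `-J` is the gfortran flag for the `.mod` output directory, other compilers use e.g. `-module`; recipe lines must start with a tab):

```make
FC     = gfortran
FFLAGS = -O2 -J build

OBJS = build/constants.o build/fields.o build/main.o

app: $(OBJS)
	$(FC) -o $@ $(OBJS)

build/%.o: src/%.f90 | build
	$(FC) $(FFLAGS) -c $< -o $@

# Module dependencies, maintained by hand: fields uses constants,
# main uses both. This is the part Make cannot infer on its own.
build/fields.o: build/constants.o
build/main.o:   build/fields.o build/constants.o

build:
	mkdir -p build

clean:
	rm -rf build app
.PHONY: clean
```

The hand-written dependency lines are exactly what grows tedious and error-prone as the project gets larger, which is where fpm or CMake start to pay off.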

Even if you are using fpm, CMake, autoconf, or anything else for your Fortran code, you can still use Make for other purposes. Things I’ve used it for: downloading data with curl or wget, building documentation, executing various scripts (Python, Perl, bash…), exporting figures with gnuplot or Asymptote, building LaTeX documents, converting between image formats (ImageMagick), turning images into animations, post-processing simulation results, and more.

Make is not just a tool for building executables, it’s more like a custom book of recipes :man_cook: , or a multi-purpose toolbox :toolbox:. Sometimes a script will be enough, but when you have logical dependencies between tasks, make has the advantage that it allows you to iterate upon individual steps in isolation.
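As an illustration of such a recipe book, here is a hypothetical post-processing pipeline; the file names, scripts, and download URL are all made up. Because the dependencies are declared, editing the plotting script and re-running `make` redoes only the plotting step, not the download or the post-processing:

```make
.PHONY: all
all: figures/energy.png

figures/energy.png: data/results.csv scripts/plot_energy.py
	python scripts/plot_energy.py data/results.csv -o $@

data/results.csv: data/raw.dat scripts/postprocess.py
	python scripts/postprocess.py data/raw.dat -o $@

# Placeholder dataset location; replace with the real one.
data/raw.dat:
	curl -L -o $@ https://example.org/dataset/raw.dat
```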

1 Like

@danielc thanks for your excellent questions. Others already answered in the same spirit, but I want to reiterate our long term strategy:

Projects like CMake and Meson fundamentally can’t provide a robust and easy-to-use experience for Fortran, because (using your words) “scientists like myself are famously bad at software engineering.” But fpm can.

3 Likes

Thanks for the extra info.

I’ve seen Fortran codes that rely on the C preprocessor for model configuration (Croco is just one example). Personally, I find this design questionable, for a few reasons:

  • By having optional modules, the compilation step becomes part of your scientific research cycle. IMO, having to edit a configuration file and recompile the code, just to turn on a magnetic field, is a bad workflow. An analogy would be having to open your car engine just to turn on the headlights. It’s an extra burden for the user. Imagine having to recompile your browser, just to enable dark mode. When possible, software should be reconfigurable dynamically, meaning the options should be exposed at run-time.
  • It’s easy to lose track of which optional modules were included in an executable. Say you’ve written your manuscript, and your supervisor or a reviewer asks you to rerun a simulation with a few different settings. Is my old executable still there, or did I already overwrite it for some reason? Which modules did I enable in the first place? Oh no, some modules in the HPC environment have changed, and now my build is broken… Something that should have been as simple as flipping a switch snowballs into a whole afternoon of work.
  • The cpp-like preprocessors provided by Fortran compilers tend to differ slightly in their behaviour, which can be a source of friction. One reason to use the C preprocessor is if you need to share settings between C/C++ and Fortran source files, but this can also be fragile. You could use a different preprocessor such as fypp, but that adds an extra step to your build process and doesn’t change the previous issues.

So what are the alternatives?

  • Perhaps the optional modules have different (and unrelated) purposes, and you can split the project into multiple executables with a single well-defined purpose.
  • If the modules are meant to enable different behaviours, use callback functions (procedure pointers) or polymorphism (classes) instead. Bind the optional modules to command-line flags, environment variables, or read them from a Fortran namelist or TOML config file. Need to share your configuration with a peer? No problem: just send them the command used to run the program or your entire config file. That’s much easier than instructing them to rebuild the software.
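To make the callback idea concrete, here is a minimal sketch of run-time selection with a namelist and a procedure pointer. All the names (`magnetic_update`, the `settings` namelist, the `settings.nml` file) are made up for illustration:

```fortran
module physics_options
   implicit none
   abstract interface
      subroutine field_update(b)
         real, intent(inout) :: b(:)
      end subroutine
   end interface
contains
   subroutine magnetic_update(b)
      real, intent(inout) :: b(:)
      b = b + 1.0        ! stand-in for the real magnetic-field update
   end subroutine
   subroutine no_update(b)
      real, intent(inout) :: b(:)
      ! magnetic field disabled: do nothing
   end subroutine
end module

program configure_at_runtime
   use physics_options
   implicit none
   procedure(field_update), pointer :: update => null()
   logical :: use_magnetic = .false.
   real :: b(3) = 0.0
   integer :: unit
   namelist /settings/ use_magnetic

   ! settings.nml contains e.g.:  &settings use_magnetic=.true. /
   open(newunit=unit, file='settings.nml', status='old')
   read(unit, nml=settings)
   close(unit)

   if (use_magnetic) then
      update => magnetic_update
   else
      update => no_update
   end if

   call update(b)   ! behaviour chosen at run time, no recompilation
end program
```

The same executable now serves both configurations, and the namelist file is a complete, shareable record of what was enabled.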

Obviously, there are situations where the preprocessor is the right approach, including

  • compiler-specific and OS-specific behaviour
  • hardware-specific optimizations
  • instrumentation, profiling, etc.
  • switching between different libraries (e.g. use FFTW instead of FFTPACK)
  • libraries exposing preprocessor options
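For the library-switching case, the usual pattern looks like the sketch below. The wrapper module names are hypothetical; the flag is set at build time, e.g. `gfortran -cpp -DUSE_FFTW`:

```fortran
module fft_backend
#ifdef USE_FFTW
   use fftw_wrapper, only: fft_forward     ! hypothetical FFTW wrapper
#else
   use fftpack_wrapper, only: fft_forward  ! hypothetical FFTPACK wrapper
#endif
   implicit none
   public :: fft_forward
end module fft_backend
```

The rest of the code only ever sees `fft_backend`, so the preprocessor choice stays confined to one file.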

Of course there are always valid exceptions. Coming up with an easily configurable and extensible design can be tricky. But there is a reason it’s called software - it’s “soft” and malleable. Unlike hardware, which is fixed once it leaves the factory, you are free to change software as your requirements change and your understanding of the problem improves.

2 Likes

Not really relevant, but how about a Fortran library that lets you do that (and much more) in your own programs, with zero need for annoying stuff like //c_null_str etc.? If the answer is yes, stay tuned. I have had one ready since October, and it should be released in a week or so. I still have to finish the documentation (which is easy but boring).

3 Likes

Yes and yes. Omg, I’ve experienced all of the above. I’ve had to recompile just to turn on a module. I’ve lost track of which file has which modules. I’ve deleted an old executable that I shouldn’t have.

It’s crazy how this is apparently considered “normal” in my community. I kept thinking that there has to be a better way.

This is it. This sounds like the right solution. I can’t split the project into different executables, but some sort of pointer or polymorphism could work. The program should just pick modules based on a config file. Ok. I don’t know what a callback function is. I know OOP but I haven’t learned it in Fortran. So I’ll have to do some reading.

FYI, the old code I have literally duplicates every file with a “dummy” version with blank routines (e.g. magnetic.f90 and nomagnetic.f90). Part of the job of the “build system” is to pick one file or the other. Ugh.

Thanks for the help!

1 Like

How about

Simply Fortran from Approximatrix? Try it free for 30 days

It costs some money, but I think it’s worth it.

Around three years ago I started with Fortran. At the time I was wandering from one IDE to another, but SF proved to be a good choice and I have been using it ever since.

1 Like

Personally, if your program does not have complex dependencies, makefiles work best. People complain about the different makefile parsers, but I’ve never had issues keeping the makefile simple and generic. For most projects I just adapt from this version for C and Fortran compilation with ifort and gfortran:
https://code.usgs.gov/modflow/mf-owhm/-/blob/main/makefile

or this version for Fortran only:
https://code.usgs.gov/fortran/bif/-/blob/main/makefile

(Note that Intel ifort has been deprecated and will be replaced by ifx at the end of this year.)

If you are working on Windows with Intel oneAPI, I find the overhead of setting up a Visual Studio Solution/Project is worth the effort for compilation and debugging; then I just use the makefile for Linux. Note that Visual Studio and oneAPI now offer no-cost versions. I also just screen-capture the build order and copy and paste it into my makefile to get the compilation order correct. Most projects don’t have drastic compilation order changes, so it’s easy to add each new file to the makefile as it’s created. The hard part is the initial setup.

You may also want to check out Batteries Included Fortran for its Random Number generators.
https://code.usgs.gov/fortran/bif

mirrored at

1 Like