Running on a computer without fpm, and other questions

Hello. As my largest Fortran codebase grows, my Makefile gets more complicated, so I was thinking of moving over to fpm before it gets too large to handle. I was able to get it to build the main simulation file pretty easily, so that was nice.

My first question is whether fpm can generate something like a Makefile so I can easily give my code to someone else so they can run it without installing fpm. I would have fpm on my machine and give them the generated instructions to run it.

Second, would there be any potentially big features I am missing out on by using fpm instead of a Makefile? I don’t want to develop my code in fpm for a while, only to find out I can’t do something I need and then have to spend the next day reimplementing Make for my code.

One of the initial goals of fpm(1) was to be able to generate make, cmake, … interfaces, but I do not see that it does so yet. It can generate a JSON or TOML dump of the model which, if complete, should be readable by a plugin that could generate a makefile; but I do not see any recent activity. So at least at first glance it looks like the hooks are there but are not (currently) being used to generate a make/cmake interface.

There are makefile generators available that might let you generate a makefile automatically. The most complete list I know of is at

and it is generally a good idea to look at, or contribute to, the Fortran Wiki

Yes, that is the goal. We have an issue open for it here: Have CMake and Make backends · Issue #69 · fortran-lang/fpm · GitHub; however, nobody has implemented it yet.

If you have time to help, that would be awesome. We need to implement these backends.


A dump of the model in a standard format was a missing component for more powerful plugins in general. I was just looking at how complete fpm build --dump $FILENAME is; if everything is there, it seems like a natural fit for an “fpm-generate” plugin that at least produces a gmake(1) input file.

I am looking for something that already exists that reads the dump file in and uses it, since if that exists the job is half done.

An alternative not mentioned above is to use fpm to make a single-file distribution, which basically eliminates the need for a build system if all the files are free-format. That might be an option for the OP as well, particularly if C/C++ files are not involved.

I find people are generally willing to install fpm from the single-file fpm.F90 source, as it just requires a Fortran compiler, and then use fpm to install projects; or they have fpm already. That is a good development, but then I do not see the need for a make/CMake build setup as well.
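For reference, bootstrapping fpm from the single-file source is typically a single compiler invocation, something along the lines of the following (the exact file name varies by release):

gfortran fpm.F90 -o fpm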

A project using registered packages and external dependencies seems like it might be problematic.

It seems like I am missing something: the model dump looks like a big step forward, but so far I do not see anything using it (?).

So I am not sure yet where I stand on actually producing the backend, as I am still sorting out just where things stand. If anyone already has something reading the dump, has more info on the dump, or is already working on this, I would be interested in hearing about it, as there are a few other possible plugins I have interest in, and once one plugin using the dump file exists I think others could leverage it.

So I will definitely scope it out; not sure where that will lead, though.

To solve this issue, use the internal model representation and write a backend that generates makefiles. If you want to completely separate it, you can also just dump the model to a file (we have a printer for it) and then load it separately (say from Python). However, I think a Fortran implementation as part of fpm would be better.

I am thinking Fortran but separate as a plugin. It looks so far like fpm build --dump is calling the printer and generating a file. There are TOML and JSON libraries available, and the TOML interface library is already an fpm dependency. So creating one plugin using that method provides a skeleton for others that have been discussed.

For starters I think I will try that approach and do something simple, like setting all the FPM environment variables based on the invocation and then starting a subshell so all the defaults are set until the shell is exited. Being in Fortran, if it looks like the interface generator should become part of the monolithic core of fpm, little effort is lost, as all but the read of the dump would just be code that is moved into and called from fpm; but it frees prototyping from needing to be synched with fpm itself.

One of the test programs (just looking at it now) looks like it might be doing something close enough to that to be reworked into a prototype for just loading the model from the --dump output.

But I also see the printer from fpm build --show-model. I am not quite sure what the pros and cons of the two printers are at the moment, but I am thinking the standard JSON/TOML files are preferred, as they use a standardized (well, pseudo-standardized) format (?)


I have found it extremely easy to build and deploy fpm on any new machine I’ve gotten access to; it is even easier than installing CMake, which some machines still lack or only have in an outdated version.

I really, really like fpm, but, for example, I can’t use it in one of my main apps because that app relies on many per-file compiler flags, legacy things, and bad Fortran overall. I am working towards modernizing it and allowing it to be built with fpm.

Although it would be nice to have Make and CMake backends, I am not 100% sure they are needed, given how easy fpm is to install.

My two cents.


Per the original post, would a single-file release work for your code? If so, a script that calls your compiler (as discussed in previous posts), used in conjunction with fpm, can be useful for distributing Fortran code. That is one of the methods fpm(1) itself offers as an option for the initial install.
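If the single-file route does work, the “build instructions” you hand out can be as small as one compiler invocation, for example (the file and program names here are hypothetical, and this assumes the concatenated source keeps each module ahead of the code that uses it):

gfortran -O2 simulation_single_file.f90 -o simulation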

If anything, a Ninja backend in fpm would be better. CMake itself uses backends such as Unix Makefiles, NMake, Visual Studio, Ninja… Ninja has the advantage of being portable and fast, and it is really easy to install as well.


I find that generally true myself for my own projects, but some people using those projects have a strong desire to integrate them into CMake and gmake builds. It is interesting to me that in most cases they could do an “fpm install” of my projects and just use them as libraries.

Perhaps the irony is that they do not want to deal with one more piece of infrastructure: they are already having issues keeping their compiler/Jenkins/Python tools/CMake/git/CVS/… build system working, so they resist using anything else, even though fpm is the lowest-maintenance, lowest-risk option (you need a Fortran compiler and the source for the tool, and that is it).

But I do see a demand for CMake integration in particular. And, as described, the plugin skeleton could be used for a lot of other plugins, I think…

Having experience with managing such software on HPC systems, I’ve regularly come upon projects that offer Makefiles for the configuration and compilation steps.

In most cases, it’s a nightmare to follow the documentation, edit the correct flags at the correct places with the appropriate paths, then pray that other external dependencies don’t need their own makefile edits.

Build systems should replace the need for makefiles completely. Case in point, autotools generates a non-human-readable makefile automatically, occasionally ~16k lines long.

My favourite process for Fortran codebases that need to be developed and run both locally and on HPC systems (and don’t already use a build system like CMake or Meson) is to:

  • migrate to fpm: As you point out, it’s very straightforward in many cases
  • manage a Python virtual environment: In its simplest form it only requires Python, and is just python -m venv .venv; source .venv/bin/activate
  • install fpm (and other tools) on the project’s virtual environment: (.venv)$ pip install fpm
  • No other changes required: (.venv)$ fpm build

I feel that this is easier to manage than any makefile. It follows modern standards and (so far) it’s completely portable across Unix systems. I believe only slight changes are needed for Windows.
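For reference, on Windows the only real change is the activation step; from cmd.exe it is:

.venv\Scripts\activate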

Also, fpm.rsp is not to be underestimated. Trying to get a makefile to support a) multiple compilers, b) multiple operating systems, c) multiple modes of compilation (release, debug), and d) differing compilation flags is downright silly.
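For comparison, switching compiler, profile, and extra flags with fpm is just a matter of command-line options (the specific flag below is only a placeholder), and these option sets are exactly the kind of thing one collects into fpm.rsp:

fpm build --compiler gfortran --profile debug
fpm build --compiler ifx --profile release --flag "-xHost"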

I’ve also created a project template for such projects to make migration even easier.

I hear you about potentially finding out features that fpm cannot support. It’s always a risk.


Yes, you can add an option fpm build --show-model-json (or toml, or whatever you prefer). Internally it’s just a simple printer of the model data structures.

You might find that fpm doesn’t put (print) everything in the “model”, in which case that’s a bug that we need to fix (just report it if you discover any issues). The idea is that the model (which you can print in any way you like) contains everything that fpm’s backend uses to build the project, so by definition you can create CMake and Make backends; there is no other information that you need.


I wrote the whole JSON/TOML model serialization feature, @urbanjost, so do feel free to ask for help! Glad to receive feedback on it, GitHub issues, etc. It was tough to get it into fpm, but all the information we need to build Makefile/Ninja/other backends is there. Great to see it can be useful!

The API is very simple: just run

fpm build --dump output_file.json
fpm build --dump output_file.toml

and fpm will decide the output format based on the requested file name.

The model dumps to standard JSON/TOML and can be parsed by any JSON/TOML package.

The dump serialises fpm_package_t, i.e. the whole fpm memory. Each class that makes it up is tested in the CI, so we’re 100% sure that it will always contain all the information even as new features are added to fpm.
For build systems, the source tree, the external dependencies, the source file descriptions, and the compiler flags are there. For example, here’s the output I have for the fpm 0.10.1 package:
JSON TOML

@hkvzjal My original idea was that fpm would just pass the JSON to a linter, or another plugin, and give information on all dependencies. I think all options are open, including adding functionality for generating Makefile/Ninja build files directly from within fpm or from external (Python) tools.


It is straightforward for modern Fortran projects that don’t rely on weird compile-time shenanigans. Lots of legacy apps are a pain in the neck and not necessarily adaptable to fpm :frowning: I know from first-hand experience.

This is great!! thanks for doing this.


Coming back to the OP’s second question, which I feel has not been addressed directly: I would say that fpm per se does not impose hard restrictions on your code base. It is there to help organize, build, test, and deploy, and basically make your life easy. So far so good. Also, nothing prevents you from having an fpm.toml, a CMakeLists.txt, custom-made Makefiles, etc. side by side, to enable building your project with multiple tools. Great.

Now, as of today I see three features missing in fpm that are not showstoppers but would definitely be important to address. I’ll list them in order of my own biased priorities:

  1. Native support for multi-platform macros.

Most of these come from OS-specific issues (Windows/Linux/macOS) or from compiler-specific extensions and/or limitations. So if one wants one’s code to be compilable on different platforms and with different compilers, C preprocessing is required. fpm enables defining macros in the fpm.toml (see the sketch after this list), but those are for a single use case. I know there is a PR somewhere in fpm about this, but so far I think it is still on hold.

  2. Building shared objects: as of now it is not supported; there is a PR here: feat: init shared library support. by arteevraina · Pull Request #1050 · fortran-lang/fpm · GitHub, so hopefully this will land soon!! Really looking forward to it.

  3. Parallel build: I mentioned using Ninja as a backend because it is extremely fast and easy to plug into CMake. Now, if fpm managed to build as fast as Ninja, I wouldn’t care as much. And since Ninja+Fortran+Windows has a hard limit on the maximum absolute path length for source files (129 or 139 characters?), if fpm built as fast, it would have a clear advantage here.
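As referenced under item 1, here is roughly what the existing (single-configuration) macro support in the manifest looks like; the macro names are placeholders, and the exact keys should be checked against the fpm manifest reference:

[preprocess]
[preprocess.cpp]
macros = ["WITH_MPI", "OS_LINUX"]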


Thought I would jump in here with a couple of questions about the current status of fpm. A couple of years ago I tried to use fpm to compile a somewhat large project (that I’ve put on hold for the moment). It has about 100 source files spread out over about 25 subdirectories under a main src directory (under Linux). My initial attempts to use fpm failed because it could not correctly resolve some module dependencies. I thought at first it might be due to a circular module reference, but my default Makefile build system never encountered a problem, since I was careful to specify the required order of compilation.

So would it be worth my time to try again with a more recent version of fpm, or does fpm still have issues with large code bases not in a single directory? fpm appears to work great for relatively small to medium builds when all the files are in one directory, and it’s definitely worth the effort to set up your project for an fpm build if that’s the case.

Also, when was the last time the online documentation was updated? The last time I looked, some of it appeared to be out of date.

The dump opens up possibilities. That is great. I have gotten the beginnings of a reader, but I am wondering if there are already example programs that display all the dependencies or list all the compiler options? One of the issues with creating more powerful plugins for fpm is access to the model from an external process, which this dump and/or the --show-model switch look poised to solve, but it would be really useful to see some examples of converting the model back into Fortran types. I am not normally a heavy JSON/TOML user, having used my own file formats, NAMELIST, and hdf4/hdf5 before they became popular, so I am probably making mistakes.

Reading in the table and then getting all the file names, along with the module names they generate and the names of the modules they use, looks to be giving me the basics for creating *.o: dependency lines (an example follows), which should allow for generating basic interfaces for self-contained src/ directories. Include information is available too, but so far it is not clear what to do with dependencies.

The primary function that makes fpm(1) attractive is replacing the need for make-like tools; but working with a repository and remote packages from source is beyond simple make usage (although make’s simple interfacing with system commands makes it possible by calling wget, curl, git, …). fpm is very weak at leveraging other commands. That obviously has a good side, but it can be limiting.
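For anyone following along, the target is plain dependency lines of this kind, where each object depends on its source file and on the objects providing the modules it uses (file names are purely hypothetical):

utils_mod.o: utils_mod.f90
main.o: main.f90 utils_mod.o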

Unless something got lost, @lkedward made fpm build in parallel early on, as long as fpm itself is built with OpenMP.

I build my own binary but it works for me.

:thinking: I was not aware of this one. How do you set a parallel build? Using the OMP_NUM_THREADS environment variable? Or is there an equivalent of -j n?

The way I’d do this in Fortran is to use fpm as a dependency of the plugin’s own fpm package, so you don’t have to redefine fpm’s internal package derived type (it took me a long while to understand it lol); a sketch of the manifest entry follows.
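I have not actually tried it, but the manifest entry for the plugin would presumably look something like this (the tag is optional; v0.10.1 is just the release mentioned earlier in the thread):

[dependencies]
fpm = { git = "https://github.com/fortran-lang/fpm", tag = "v0.10.1" }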

Because all fpm derived types are now serializable_t, they can be fully loaded from / dumped to JSON. For example, just load the fpm_model_t from a file:

use fpm_model, only: fpm_model_t
use fpm_error, only: error_t
implicit none

type(fpm_model_t) :: my_package
type(error_t), allocatable :: error   ! allocated only if the load fails; check after each call

call my_package%load(file="model_dump.json", error=error, json=.true.)  ! if JSON input
call my_package%load(file="model_dump.toml", error=error, json=.false.) ! if TOML input

That will load everything back into the fpm_model_t structure.

PS I’ve never used fpm as a dependency to another package, but I don’t see why it shouldn’t work!