This is great news for anyone developing cross-platform software with multiple dependencies.
I am curious whether fpm could learn from or adopt some of the methods employed by Spack, specifically regarding the reuse of builds, now that the registry is being actively developed.
Spack is actually not new in this regard; in fact, one aspect of how fpm separates builds (hashing the compiler options) I borrowed from stack.
What Spack brings to the table is trying to manage the dependencies much further down the stack (i.e. compiler, network interface libraries, etc.). But of course it’s not really new here either: Nix was already doing that, as well as reusable builds.
IMO Spack’s real distinguishing factor is that it has been branded as “the HPC package manager”, and seems to handle package “options” a bit nicer.
That’s a way to separate installations based on build inputs. Spack has been hashing build provenance since 2013, flags were added to that in 2016, and it looks like stack started doing this in 2018. hashdist and guix were doing this in 2012 and of course the whole idea of a derivation hash came from Nix in 2004.
But @gnikit was asking:
This is at odds with hashing. The more fine-grained you make your build hash, the harder it is to reuse a build, because something will be ever so slightly different. These days, a Spack hash includes a canonicalized version of the package.py recipe, flags, variants, the package’s version, the compiler, its version, OS, target arch, and all that for all dependencies. If you hash your configuration, then look for hash matches, and reuse only the exact matches, you get a pretty small amount of reuse.
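To make that concrete, here is a toy sketch of what hashing a canonicalized build description might look like. It is illustrative only: the `spec_hash` function and the field names are invented for this example and are not Spack’s actual implementation.

```python
import hashlib
import json

def spec_hash(spec: dict) -> str:
    """Hash a canonicalized description of a build, including its dependencies.

    `spec` is a toy stand-in for a package spec: name, version, compiler,
    variants, flags, target, plus the same information for every dependency.
    """
    canonical = {
        "name": spec["name"],
        "version": spec["version"],
        "compiler": spec["compiler"],  # e.g. {"name": "gcc", "version": "13.2"}
        "variants": dict(sorted(spec.get("variants", {}).items())),
        "flags": sorted(spec.get("flags", [])),
        "target": spec.get("target", "x86_64"),
        # Recurse so that changing anything in a dependency changes this hash too.
        "dependencies": sorted(spec_hash(d) for d in spec.get("dependencies", [])),
    }
    blob = json.dumps(canonical, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Flipping any field (a flag, a variant, a dependency's version) changes the digest,
# which is exactly why exact-match reuse gets rarer as the hash gets more detailed.
zlib = {"name": "zlib", "version": "1.3", "compiler": {"name": "gcc", "version": "13.2"}}
hdf5 = {"name": "hdf5", "version": "1.14", "compiler": {"name": "gcc", "version": "13.2"},
        "variants": {"mpi": True}, "dependencies": [zlib]}
print(spec_hash(hdf5))
```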
To avoid this, we actually optimize for reuse when we do dependency solves. The solver by default prioritizes reusing already installed packages (or packages available from a build cache) if they don’t violate any requirements of what you’re trying to install. If you need a newer version of something or a particular option, Spack will build it. The implementation (and a comparison with reusing just based on hash matches) is described in this paper.
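The “reuse unless it violates a constraint” behavior can be pictured with a greedy toy sketch like the one below. The `resolve` and `satisfies` helpers are made up for illustration; the actual concretizer is an answer-set-programming solver that optimizes over all choices at once, as described in the paper.

```python
def resolve(request: dict, installed: list[dict]) -> dict:
    """Pick a build for `request`, preferring something already installed.

    `request` carries the constraints we care about (name, acceptable versions,
    required variants); `installed` is the list of specs already on disk or in
    a build cache. Only if nothing compatible exists do we plan a fresh build.
    """
    def satisfies(spec: dict) -> bool:
        return (
            spec["name"] == request["name"]
            and spec["version"] in request.get("versions", [spec["version"]])
            and all(spec.get("variants", {}).get(k) == v
                    for k, v in request.get("variants", {}).items())
        )

    # Prefer reuse: any installed/cached spec that meets the constraints wins.
    for spec in installed:
        if satisfies(spec):
            return {"action": "reuse", "spec": spec}

    # Nothing installed matches the constraints: configure a source build instead.
    new_spec = {"name": request["name"],
                "version": request.get("versions", ["latest"])[0],
                "variants": request.get("variants", {})}
    return {"action": "build", "spec": new_spec}
```

The sketch is greedy and per-package; the real solver instead optimizes globally, trading off reuse against other preferences (newer versions, defaults) across the whole dependency graph.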
I would say the distinguishing features are the dependency model and the degree to which we are trying to parameterize and model constraints in the solver. Many other systems have solvers; most are there to select a compatible set of binaries or fixed build configurations. Nix models a lot about underlying dependencies, but it doesn’t have a solver – you get whatever’s at the nixpkgs commit you checked out. Spack has both of those things, and the solver is selecting binaries and configuring source builds around them.
If you’re interested, there is more in this video from PackagingCon about more recent efforts to handle package ecosystem complexities (like duplicate package versions), and we’ve been able to start modeling compiler runtime libraries like libgfortran and libifcore (see the v0.22 release notes).
@tgamblin awesome, thanks for getting Spack working on Windows. I think I’ve been asking you for this feature for close to 10 years now. I actually have a Windows machine now, so I can test it out!
One use case just from today: https://lfortran.zulipchat.com/#narrow/stream/197339-general/topic/Building.20Lfortran/near/440902397, the LLVM in Conda on Windows is built in Release mode, and MSVC doesn’t seem to allow mixing Release and Debug modes, which prevents us from building LFortran in Debug mode. Spack might be able to fix this.
Thanks for the clarifications @tgamblin. I don’t mean to suggest that Spack isn’t a useful/valuable project, but I’m still getting a feel for the advantages and disadvantages of using it.