We should maintain it under fortran-lang, and then possibly use it in stdlib as the default package. This should be configurable (via fpm and other build systems) so that one can build stdlib with MKL or OpenBlas. But using the reference Lapack (which is or can be made pure Fortran I believe) as the default should make it problem free to install, and thus we can add high level wrappers to stdlib.
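As a sketch of what that configurability might look like with fpm today (the library names below are assumptions about what is installed on the system; a proper backend-selection mechanism is not yet a built-in fpm feature):

```toml
# Hypothetical fpm.toml fragment: the project links against whatever
# LAPACK/BLAS is on the system linker path. Swapping the reference
# libraries for an optimized one is a one-line change here.
[build]
link = ["lapack", "blas"]   # reference libraries
# link = ["openblas"]       # or an optimized implementation instead
```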
When I first started using Fortran with LAPACK as an undergrad, I was surprised to find that linear algebra operations were noticeably slower than in MATLAB. At the time I didn’t know about the difference between reference LAPACK and optimised implementations like OpenBLAS.
I’m not sure it is a good idea to use reference LAPACK as the default option here, because it can’t compete with implementations like OpenBLAS; that could lead to the perception that Fortran is slow for linear algebra in comparison to other languages.
Rather I think we should promote the use of optimised implementations and make it easy to do so.
This was one of the motivations behind quickstart-fortran, which also installs OpenBLAS on Windows, making it easy to link against from fpm.
Granted, linking against MKL is still an open issue for fpm; see also:
I agree with @lkedward and @ivanpribec in general about LAPACK, unless there is a project to actually develop the code using co-arrays.
On the other hand I often build a local version from the distribution for miscellaneous uses.
One unusual case: in several environments I find the reference versions are faster when a job is intentionally restricted to a single core (we often want jobs scheduled by users as single-core jobs to actually use only one CPU, and using an optimized version often means using a parallel version, which can actually be slower on one CPU). The optimized versions often require setting additional environment variables and so on to ensure they do not run in parallel and interfere with other jobs on the same nodes; otherwise they end up running slower. In some environments we can use NUMA control and processor affinity to the same effect more generally, even with pre-built executables, so this is a niche issue, but an interesting one.
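A rough illustration of the environment-variable juggling this involves, shown here with NumPy since it links the same BLAS libraries (the variable names are the conventional ones for OpenMP, OpenBLAS, and MKL builds; which one actually matters depends on the BLAS your installation links against):

```python
# Sketch: pinning an optimized BLAS to a single thread via environment
# variables. These must be set before the threaded library is loaded,
# i.e. before the first import of numpy.
import os

os.environ["OMP_NUM_THREADS"] = "1"        # OpenMP-based builds
os.environ["OPENBLAS_NUM_THREADS"] = "1"   # OpenBLAS
os.environ["MKL_NUM_THREADS"] = "1"        # Intel MKL

import numpy as np  # imported only after setting the variables

a = np.random.rand(200, 200)
b = np.random.rand(200, 200)
c = a @ b  # runs on a single core under the settings above
print(c.shape)
```

The equivalent for a pre-built Fortran executable is simply exporting the same variables in the job script before launching it.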
I did start a minimal build, though not with the aim of making a LAPACK package per se, and had no intention of posting it on GitHub until now.
The code made a great test suite for trying out plusFORT, and it exercises fpm and compilers well; in fact, it cannot be built with fpm in several environments because of the length of the resulting system commands used to build the libraries. It is totally untested, but if anyone else wants to pursue a LAPACK package perhaps the files would be useful in other unexpected ways, so I posted it and might even clean it up a bit if time permits; but in no way should anyone interpret it as a supported LAPACK version.
I would like the most commonly used LAPACK routines to have equivalents in standard Fortran; but optimized LAPACK versions are available on many platforms and LAPACK is one of the few historic packages supported by several Fortran vendors, etc… as mentioned above.
How many people build their own version of LAPACK from the reference sources even when it is available in other formats? In little programs where optimization is not a huge issue I would probably use an fpm version of LAPACK if it built selectively (a full build takes a while), but in general I want to use a pre-built, tested version, both for QA purposes and for performance.
@lkedward did you compile LAPACK in release mode with good compiler optimization options? I believe the default CMake setup, even in Release mode, does not enable fast-math flags such as -ffast-math, for example. Maybe OpenBLAS or MKL should be the default, and the reference LAPACK could be available only when explicitly requested; that way we would not be encouraging its use.
Yes, it was in release mode, but not with fast math. I don’t know enough about the algorithms in LAPACK to know whether I can safely enable fast math; however, I assume that since the developers do not use it, it is perhaps not recommended. Moreover, I believe OpenBLAS doesn’t use fast math to achieve its performance, so I’m not sure it’s relevant.
The tests had duplicate symbols, so even the original required -zmuldefs, and there were type issues as well; but if there is interest in the free-form Fortran version I mentioned above, maybe I will make it an alternate version. I still need to get the tests working first, but I am finding I do want a basic portable version in fpm, even though I have several optimized versions available on the machines where I use LAPACK in larger production code. I might back up and make a version without as many modifications, though. Has anyone already done this? Are there fpm versions of LAPACK I missed?
I still need to run and test them, but I think I got all the tests (not the actual libraries) into modules successfully. I had to take all the test-dependent procedures and modules, put them into a “fake” package, and use that as a test dependency; I could not get the tests to load correctly without doing that. The main libraries produce such a long ar(1) command that they break execute_command_line in several programming environments. I also could not use the pseudo-package as a test dependency without putting at least one test program in the fpm.toml file, since fpm would not let me specify a test dependency without one, even with auto-detect on. It has been interesting, but that is more appropriate for the fpm forum.
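For what it’s worth, the workaround described above might look roughly like this in fpm.toml (all names and paths here are hypothetical placeholders, not the actual repository layout):

```toml
# Sketch: the test-support procedures live in a separate "fake" package
# that is pulled in only as a test dependency. At least one [[test]]
# entry must be declared explicitly for test.dependencies to apply.
[[test]]
name = "lapack-tests"
source-dir = "test"
main = "main.f90"

[test.dependencies]
lapack-test-support = { path = "../lapack-test-support" }
```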
If nothing else LAPACK makes a dandy fpm test repository.
Several of our friends (@Euler-37, @St_Maxwell) also discussed this issue privately. We think that Fortran’s array capabilities are very strong, and we need routines such as inv and det.
As far as we know, numpy uses compiled OpenBLAS, and distributes different OpenBLAS on different systems and CPU architectures. (see link1 & link2 below)
Therefore, if fpm can achieve the distribution of binary link libraries such as OpenBLAS, building high-level interface libraries becomes feasible. (see link3)
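For reference, one can inspect which BLAS/LAPACK a given NumPy installation was built against; a minimal check (assuming NumPy is installed):

```python
# Print the BLAS/LAPACK build configuration of the local NumPy install;
# for wheels from PyPI this typically reports the bundled OpenBLAS.
import numpy as np

np.show_config()
```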
Just to clarify, I haven’t seen anybody saying that “inv is not important”. What I have seen people say, myself included, is that inv is often not the best tool for production code (except in a few niche applications), but it is very useful for debugging, experimenting, and checking:
and we need to have it, for example as part of stdlib.
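A small NumPy sketch of the distinction being drawn here, since the proposed Fortran interface does not exist yet: inv is handy for experimenting and checking, while solve is usually the better tool in production because it factorizes the matrix without forming an explicit inverse, which is cheaper and numerically more accurate.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
b = np.array([1.0, 2.0])

# Explicit inverse: convenient when exploring or debugging.
x_inv = np.linalg.inv(A) @ b

# Factorize-and-solve: the usual choice in production code.
x_solve = np.linalg.solve(A, b)

print(np.allclose(x_inv, x_solve))  # True
```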