I run a simple program to solve a system of linear algebraic equations with a sparse matrix using Sparspak90.
The simulation is run in WSL2 with Ubuntu-22.04 on a Windows 10 machine. The code is compiled with gfortran, with the default implementations of LAPACK and BLAS.
The weird thing is the time measurement. I get
StubbyKExample$ rm *.exe; ./compile_SimpleExample.sh ; time ./SimpleExample.exe
63070 63070 4223032
in FindOrder
Calling MMD
before factor
7.98282814 217.167267
after factor
---------------------------- Timing Information ----------------------------
Time for Ordering 0.078
Time for symbolic factorization 0.090
Time for matrix input 0.285
Time for factorization 20.921
Time for forward/back solve 0.023
----------------------------------------------------------------------------
real 3m45.192s
user 3m36.989s
sys 0m0.417s
So the total time of the run seems to agree with the time reported by cpu_time (217.167267 - 7.98282814 ≈ 209 s), which I measure around the factorization call like this:
call cpu_time(T1)
call LUFactor(s%n, s%nsuper, s%xsuper, s%snode, &
s%xlindx, s%lindx, s%xlnz, s%lnz, s%xunz, s%unz, &
s%ipiv, s%errflag)
call cpu_time(T2)
print *, T1, T2
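One cross-check I could add here, to rule out a difference between CPU time and wall-clock time (cpu_time reports CPU time, which can exceed the wall-clock time if the BLAS is threaded), is to bracket the same call with system_clock as well. A minimal sketch; only the system_clock bookkeeping (c1, c2, crate, wall) is new, the rest is the snippet above:

    integer :: c1, c2, crate
    real    :: wall

    call cpu_time(T1)
    call system_clock(c1, crate)              ! wall-clock reference point
    call LUFactor(s%n, s%nsuper, s%xsuper, s%snode, &
                  s%xlindx, s%lindx, s%xlnz, s%lnz, s%xunz, s%unz, &
                  s%ipiv, s%errflag)
    call system_clock(c2)
    call cpu_time(T2)
    wall = real(c2 - c1) / real(crate)        ! elapsed wall-clock seconds
    print *, 'cpu_time    :', T2 - T1
    print *, 'system_clock:', wall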
However, this does not agree with the statistics printed above, which come from measurements with GetTime (20.921 s for the factorization):
call GetTime(s%factorTime)
call Factor(s%slvr)
call GetTime(s%factorTime, s%factorTime)
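I do not know which clock GetTime is based on (CPU time, wall-clock time, or something else), so another sketch I could try is to bracket the very same Factor call with all three timers and compare the numbers directly; only the extra timing variables (ct1, ct2, k1, k2, krate) are mine, everything else is as in the snippet above:

    real    :: ct1, ct2
    integer :: k1, k2, krate

    call GetTime(s%factorTime)
    call cpu_time(ct1)
    call system_clock(k1, krate)
    call Factor(s%slvr)
    call system_clock(k2)
    call cpu_time(ct2)
    call GetTime(s%factorTime, s%factorTime)

    print *, 'GetTime     :', s%factorTime
    print *, 'cpu_time    :', ct2 - ct1
    print *, 'system_clock:', real(k2 - k1) / real(krate)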
Now, I am tempted to believe the measurement of 20.9 seconds, because the UMFPACK sparse solver (called from Julia) deals with the task in the same environment, with a similar method (supernodal LU vs. multifrontal LU), in about 20.5 seconds. But then, what is the code doing for the remaining three minutes?
My final data point is a timing of the Julia rewrite of the Fortran code: it takes ~27 seconds for the factorization, again in the same environment.
Edit: Originally, the comparison was with chol (hence CHOLMOD), but in order not to compare apples with orangutans I switched to lu (UMFPACK), forgetting to edit the post. Sorry about the mix-up.