Generic or Kind-Agnostic linear system solvers

Dear All, today I would like some clarification about handling generic procedures in Fortran without re-coding the same procedure for each new set of data types. I want to apply this to a very specific problem, that of solving a linear system with a Krylov method such as the pcg. The following code is taken from here, where Dr. Enrico Facca kindly provided a template for a Krylov API. As you can see, the input matrix is of the derived type abstract_operator and the vectors come from the DVectors module.

 subroutine pcg(matrix,rhs,sol,ierr,&
       tolerance,max_iterations,preconditioner)
    use DVectors
    use AbstractOperators
    use IdentityOperators
    implicit none
    class(abstract_operator),           intent(inout) :: matrix
    [...]
    class(abstract_operator), target,optional, intent(inout) :: preconditioner
    [...]
    !
    ! cycle
    !
    iter=0
    do while (.not. exit_test)
       iter = iter + 1   
       ! compute  pres = PREC  (r_{k+1})
       call prec%apply(resid,pres,ierr)

       !  calculates beta_k
       if (iter.eq.1) then
          beta = 0.0d0
       else
          beta = -dot_dvector(pres,axp)/ptap
       end if

       !  calculate p_{k+1}:=pres_{k}+beta_{k}*p_{k}
       call axpby_dvector(1.0d0,pres,beta,pk)
          
       !  calculates axp_{k+1}:= matrix * p_{k+1}
       call matrix%apply(pk,axp,ierr)
       
       !  calculates \alpha_k
       ptap  = dot_dvector(pk,axp)
       alpha = dot_dvector(pk,resid)/ptap
       
       !  calculate x_k+1 and r_k+1
       call axpby_dvector(alpha,pk,1.0d0,sol)
       call axpby_dvector(-alpha,axp,1.0d0,resid)

       !  compute relative residual norm
       resnorm = norm_dvector(resid)/bnorm
     
       ! checks
       exit_test = ( ( iter .gt. max_iter ) .or. ( resnorm .le. tol ) )
       [...]
    end do

    [...]

  end subroutine pcg

Since this is just a combination of matrix-vector products, scalar products and vector updates, it would be interesting if there were a way to make it “kind agnostic”. Considering, for simplicity, only different kinds of matrices, it would be amazing if the pcg could be applied to a tabular matrix, a CSR matrix, a COO matrix, a CSC matrix or even a user-defined data structure, without redefining the pcg for each type of matrix. This last possibility is, IMHO, the key to user-defined parallelization, because you would only need to define the data structure and its communication procedures and then simply hand it to the pcg.

As an example, imagine solving the same linear system twice: the first time you provide the full matrix to the pcg as it is; the second time you have a domain decomposition procedure that imposes a block structure on the matrix. You might then define the block structure as a derived data type, define a matvec function that handles the communication correctly, use type-bound procedures to apply it, and give this data structure to the pcg. This idea could be taken even further, because the concept also extends to the DVectors data type, to provide, for example, an OpenMP dvector, an OpenACC dvector and so on.
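For concreteness, here is a minimal self-contained sketch of that last idea. The type and procedure names are hypothetical stand-ins for the ones in the template above, and the communication step is only indicated by a comment:

module block_operators
   implicit none

   ! minimal stand-in for the library's abstract operator
   type, abstract :: abstract_operator
   contains
      procedure(op_apply), deferred :: apply
   end type

   abstract interface
      subroutine op_apply(this, x, y, ierr)
         import abstract_operator
         class(abstract_operator), intent(inout) :: this
         real, intent(in)  :: x(:)
         real, intent(out) :: y(:)
         integer, intent(out) :: ierr
      end subroutine
   end interface

   ! domain-decomposed operator: each process owns one diagonal block
   type, extends(abstract_operator) :: block_operator
      real, allocatable :: local_block(:,:)
   contains
      procedure :: apply => block_apply
   end type

contains

   subroutine block_apply(this, x, y, ierr)
      class(block_operator), intent(inout) :: this
      real, intent(in)  :: x(:)
      real, intent(out) :: y(:)
      integer, intent(out) :: ierr
      ! a real implementation would first exchange interface data with
      ! the neighbouring subdomains (e.g. an MPI halo exchange), then:
      y = matmul(this%local_block, x)
      ierr = 0
   end subroutine

end module

The pcg never needs to know that the operator is distributed: all the communication lives inside block_apply.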

Let’s consider the following case: there is one module implementing the pcg; two modules contain predefined matrix representations, say tabular and CSR; a last module defines a user-defined matrix representation. I would like the pcg to accept any of the three matrix representations, so it should be in some way kind-agnostic. The tabular and CSR matrices, as well as the pcg, could come from some library (hopefully one day stdlib) that the user should not be re-coding or modifying directly. Finally, the last module, containing the user-defined data type, should be the only one the user touches directly, and yet be perfectly interchangeable with the other two.

What would be your preferred approach to do this? Any suggestion for further reading is welcome.
This could also be something interesting to develop in stdlib, especially if one could really have a set of solvers that only need to know the types of the matrices and vectors.

Feel free to ask for clarifications if you find this post not clear enough. I hope to come back during the weekend with an MWE.

This is a recurring concept in libraries for sparse matrix computation:

In older libraries (written before Fortran supported polymorphism), the iterative solvers were made generic with respect to the matrix storage by using reverse communication. The ARPACK library also uses this approach.

Salvatore Filippone and Damian Rouson (@rouson) have written articles on this topic:

You could even see it as a case of convergent evolution.

2 Likes

The SUNDIALS library of ODE solvers uses this approach too, except that they call them NVECTORs: 9.1. Description of the NVECTOR Modules — User Documentation for SUNDIALS documentation

An NVECTOR is a struct containing two items:

struct _generic_N_Vector {
   void *content;
   struct _generic_N_Vector_Ops *ops;
};

The content holds the data, and the ops is a struct containing a bunch of function pointers (a dispatch table or virtual method table).

In Fortran you would use polymorphism for this purpose (which is really the same thing, just that the compiler manages the virtual method table for you):

type, abstract :: generic_vector
contains
   procedure(p_getlength), deferred :: getlength
   procedure(p_linearsum), deferred :: linearsum
   procedure(p_prod), deferred :: prod
   procedure(p_div), deferred :: div
   procedure(p_scale), deferred :: scale
   ! ...
end type
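
where the deferred bindings point to abstract interfaces along these lines (a sketch; SUNDIALS’ actual operation signatures differ slightly):

abstract interface
   ! number of elements in the vector (cf. N_VGetLength)
   integer function p_getlength(self)
      import generic_vector
      class(generic_vector), intent(in) :: self
   end function

   ! self := a*x + b*y (cf. N_VLinearSum)
   subroutine p_linearsum(self, a, x, b, y)
      import generic_vector
      class(generic_vector), intent(inout) :: self
      real, intent(in) :: a, b
      class(generic_vector), intent(in) :: x, y
   end subroutine
end interface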

Similar abstractions are also used for matrices and linear solvers in SUNDIALS.

This pattern is very nicely used in MATLAB and SciPy. For example if you look at the interface of pcg:

x = pcg(A,b)

the variable A can be a sparse matrix or a function handle:

Specifying A as a Function Handle

You can optionally specify the coefficient matrix as a function handle instead of a matrix. The function handle returns matrix-vector products instead of forming the entire coefficient matrix, making the calculation more efficient.

To use a function handle, use the function signature function y = afun(x)

The function handle could be a wrapper of a MEX function written in C or Fortran:

block_matrix = ...
afun = @(x) fortran_mex_function(x, block_matrix);

b = ones(n,1);
x = pcg(afun,b);

Also the sparse solvers in SciPy borrow this concept, e.g. scipy.sparse.linalg.cg:

scipy.sparse.linalg.cg(A, b, x0=None, *, ...)

Use Conjugate Gradient iteration to solve Ax = b.

Parameters:

A {sparse matrix, ndarray, LinearOperator}

The real or complex N-by-N matrix of the linear system. A must represent a hermitian, positive definite matrix. Alternatively, A can be a linear operator which can produce Ax using, e.g., scipy.sparse.linalg.LinearOperator.


As an example, let’s port the conjugate gradient routine in MATLAB from the work of Gilbert, Moler, and Schreiber (1992).

function x = cgsolve (A,b,tol)

% Solve A*x = b by the conjugate gradient method.
% Iterate until norm(A*x-b) / norm(b) <= tol.

   x = zeros(size(b));
   r = b;
   rtr = r'*r;
   p = zeros(size(b));
   beta = 0;
   while ( norm(r) > tol * norm(b) )
      p = r + beta * p;
      Ap = A * p;
      alpha = rtr / ( p' * Ap );
      x = x + alpha * p;
      r = r - alpha * Ap;
      rtrold = rtr;
      rtr = r'*r;
      beta = rtr / rtrold;
   end

Approach 1 - Callback Function


subroutine cgsolve(A,b,x,tol)
    implicit none
    interface
        ! Evaluates y := matmul(A,x)
        subroutine A(x,y)
            real, intent(in) :: x(:)
            real, intent(out) :: y(:)
        end subroutine
    end interface
    real, intent(in) :: b(:), tol
    real, intent(out) :: x(:)

    real :: rtr, rtrold, alpha, beta
    real, allocatable :: r(:), p(:), Ap(:)

    allocate(Ap,mold=b)
    allocate(r,source=b)
    x = 0.0
    rtr = dot_product(r, r)
    p = b
    beta = 0

    do while( norm2(r) > tol*norm2(b))
        p = r + beta * p
        call A(p,Ap)   ! Ap := A*p via the callback
        alpha = rtr / dot_product(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rtrold = rtr
        rtr = dot_product(r, r)
        beta = rtr / rtrold
    end do

end subroutine
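
A hypothetical caller then only needs a conforming subroutine, e.g. an internal procedure that captures the matrix by host association (this assumes the routine above is placed in a module, here called cg_callback, so that its interface is explicit):

program demo
    use cg_callback, only: cgsolve   ! hypothetical module holding the routine above
    implicit none
    real :: A(2,2) = reshape([4.0, 1.0, 1.0, 3.0], [2,2])
    real :: b(2) = [1.0, 2.0]
    real :: x(2)

    call cgsolve(matvec, b, x, 1.0e-4)
    print *, x   ! expect approximately [1/11, 7/11]

contains

    ! conforms to the callback interface; A is captured by host association
    subroutine matvec(v, Av)
        real, intent(in)  :: v(:)
        real, intent(out) :: Av(:)
        Av = matmul(A, v)
    end subroutine

end program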

Approach 2 - Polymorphism

module sparse_linalg
implicit none

type, abstract :: linop
contains
    procedure(matmul), deferred :: times 
end type

abstract interface
    ! evaluate y := op(x)
    subroutine matmul(op,x,y)
        import linop
        class(linop), intent(in) :: op
        real, intent(in) :: x(:)
        real, intent(out) :: y(:)
    end subroutine
end interface

contains

subroutine cgsolve(A,b,x,tol)
    class(linop), intent(in) :: A
    real, intent(in) :: b(:), tol
    real, intent(out) :: x(:)

    real :: rtr, rtrold, alpha, beta
    real, allocatable :: r(:), p(:), Ap(:)

    allocate(Ap,mold=b)
    allocate(r,source=b)
    x = 0.0
    rtr = dot_product(r, r)
    p = b
    beta = 0

    do while( norm2(r) > tol*norm2(b))
        p = r + beta * p
        call A%times(p,Ap) 
        alpha = rtr / dot_product(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rtrold = rtr
        rtr = dot_product(r, r)
        beta = rtr / rtrold
    end do

end subroutine

end module
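
Usage is then a matter of extending linop with a concrete type. A sketch, where dense_matrix and the driver are hypothetical (note the only clause, which avoids importing the abstract interface named matmul and thereby shadowing the intrinsic):

module dense_ops
use sparse_linalg, only: linop, cgsolve
implicit none

! concrete operator: a dense matrix stored as a component
type, extends(linop) :: dense_matrix
    real, allocatable :: a(:,:)
contains
    procedure :: times => dense_times
end type

contains

! override of the deferred binding: y := matmul(op%a, x)
subroutine dense_times(op, x, y)
    class(dense_matrix), intent(in) :: op
    real, intent(in)  :: x(:)
    real, intent(out) :: y(:)
    y = matmul(op%a, x)
end subroutine

end module

program demo
    use dense_ops
    implicit none
    type(dense_matrix) :: A
    real :: b(2) = [1.0, 2.0], x(2)
    A%a = reshape([4.0, 1.0, 1.0, 3.0], [2,2])
    call cgsolve(A, b, x, 1.0e-4)
    print *, x   ! expect approximately [1/11, 7/11]
end program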

Approach 3 - Reverse Communication

I won’t attempt to explain this one in detail, as I believe it has gone out of fashion. To some extent it’s a matter of taste whether you like it or not.

The concept is explained in

Victor Eijkhout (@victotronics) explained it as follows,

The idea [of reverse communication] is to make the function data-independent by keeping the data entirely outside its scope. Thus for every operation on that data you have to return, with a request parameter indicating what operation has to be performed outside, and you have to have a way to indicate where you resume after the operation is done.

I’ve written code like this, and it’s kinda neat, but I’m not entirely unconvinced that it’s not totally evil. Or not.

Quoting @Arjen Markus on the other hand:

Reverse communication may be complicated to implement, but as long as the user code remains simple - just a do loop and a select construction - it may be a very useful solution anyway.
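
For illustration only, the pattern looks something like this (a minimal self-contained sketch with hypothetical names and flag values, not any specific library’s API). The solver hands control back to the caller whenever it needs a matrix-vector product, so the matrix data never enters its scope:

module rc_cg
implicit none
private
public :: cg_rci

! solver state kept between calls (a real library would carry this
! in a handle/workspace argument instead of module variables)
real, allocatable :: r(:), p(:)
real :: rtr
integer :: stage = 0

contains

! Reverse-communication CG step. On return:
!   ido = 1  -> the caller must set Av := A*v and call again
!   ido = 99 -> converged; x holds the solution
subroutine cg_rci(ido, b, x, v, Av, tol)
    integer, intent(inout) :: ido
    real, intent(in)    :: b(:), tol
    real, intent(inout) :: x(:), v(:), Av(:)
    real :: alpha, beta, rtrold

    select case (stage)
    case (0)                        ! first call: initialize
        x = 0.0
        r = b
        p = b
        rtr = dot_product(r, r)
        v = p                       ! request Av := A*p
        ido = 1
        stage = 1
    case (1)                        ! resume after the matvec
        alpha = rtr / dot_product(p, Av)
        x = x + alpha*p
        r = r - alpha*Av
        rtrold = rtr
        rtr = dot_product(r, r)
        if (norm2(r) <= tol*norm2(b)) then
            ido = 99
            stage = 0
        else
            beta = rtr / rtrold
            p = r + beta*p
            v = p                   ! request the next matvec
            ido = 1
        end if
    end select
end subroutine

end module

program demo
    use rc_cg
    implicit none
    real :: A(2,2) = reshape([4.0,1.0,1.0,3.0],[2,2])
    real :: b(2) = [1.0,2.0], x(2), v(2), Av(2)
    integer :: ido

    ido = 0
    do                               ! "just a do loop and a select construction"
        call cg_rci(ido, b, x, v, Av, 1.0e-4)
        select case (ido)
        case (1)
            Av = matmul(A, v)        ! perform the requested operation
        case (99)
            exit
        end select
    end do
    print *, x
end program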

I guess you can find more opinions if you dig through comp.lang.fortran. Unless you are writing code for an old thermodynamic database implemented using the Component Object Model (COM), it’s probably better to stick with callbacks or abstract types.

2 Likes

Building on top of what @ivanpribec has shown, one could imagine an architecture that enables genericity by exploiting polymorphism and procedure pointers. Here is an MWE:

module stdlib_krylov
implicit none

type :: linop
    procedure(mxm), nopass, pointer :: times => null()
    procedure(dot), nopass, pointer :: inner => null()
end type

abstract interface
    ! evaluate y := op(x)
    subroutine mxm(x,y)
        real, intent(in)  :: x(:)
        real, intent(out) :: y(:)
    end subroutine
    function dot(x,y) result(r)
        real, intent(in) :: x(:)
        real, intent(in) :: y(:)
        real :: r
    end function
end interface

interface cgsolve
    module procedure :: cgsolve_generic
    module procedure :: cgsolve_dense_sp
end interface

contains

subroutine cgsolve_generic(A,b,x,tol)
    class(linop), intent(in) :: A
    real, intent(in) :: b(:), tol
    real, intent(out) :: x(:)

    real :: rtr, rtrold, alpha, beta
    real, allocatable :: r(:), p(:), Ap(:)

    allocate(Ap,mold=b)
    allocate(r,source=b)
    x = 0.0
    rtr = A%inner(r, r)
    p = b
    beta = 0

    do while( norm2(r) > tol*norm2(b))
        p = r + beta * p
        call A%times(p,Ap) 
        alpha = rtr / A%inner(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rtrold = rtr
        rtr = A%inner(r, r)
        beta = rtr / rtrold
    end do

end subroutine

subroutine cgsolve_dense_sp(A,b,x,tol)
    real, intent(in) :: A(:,:)
    real, intent(in) :: b(:), tol
    real, intent(out) :: x(:)
    !-------------------------
    type(linop) :: op
    !-------------------------
    op%times => custom_matvec
    op%inner => custom_caf_dot
    call cgsolve_generic(op,b,x,tol)
    contains
    subroutine custom_matvec(x,y)
        real, intent(in)  :: x(:)
        real, intent(out) :: y(:)
        y = matmul(A,x)
    end subroutine
    function custom_caf_dot(x,y) result(r)
        real, intent(in) :: x(:)
        real, intent(in) :: y(:)
        real :: r
        r = dot_product(x,y)
#ifdef CAF
        call co_sum(r)
#endif
    end function
end subroutine

end module

subroutine example_dense()
    use stdlib_krylov
    implicit none
    real :: A(2,2) = reshape([4.0,1.0,1.0,3.0],shape=[2,2])
    real :: b(2) = [1.0,2.0]
    real :: x(2)
    real :: tol = 0.0001

    call cgsolve(A,b,x,tol)
    print *, x

end subroutine


program main
    implicit none

    call example_dense()

end program

A Compiler Explorer execution showing output for gfortran/ifx/flang-new

With the DT defining the operators (one could say, defining the vector space), it would be possible to extend it even further to hold a factorization of the matrix for applying a preconditioner.
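
A sketch of that extension, reusing the shape of the linop type above; the prec slot and the apply_prec helper are hypothetical:

module krylov_prec_sketch
implicit none

type :: linop
    procedure(mxm), nopass, pointer :: times => null()
    procedure(dot), nopass, pointer :: inner => null()
    ! hypothetical extra slot: y := M^{-1}x, e.g. a triangular solve
    ! with factors captured by the target procedure
    procedure(mxm), nopass, pointer :: prec => null()
end type

abstract interface
    subroutine mxm(x,y)
        real, intent(in)  :: x(:)
        real, intent(out) :: y(:)
    end subroutine
    function dot(x,y) result(r)
        real, intent(in) :: x(:), y(:)
        real :: r
    end function
end interface

contains

! inside a solver: apply the preconditioner if provided, identity otherwise
subroutine apply_prec(op, x, y)
    type(linop), intent(in) :: op
    real, intent(in)  :: x(:)
    real, intent(out) :: y(:)
    if (associated(op%prec)) then
        call op%prec(x, y)
    else
        y = x
    end if
end subroutine

end module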

Kind-agnosticism could be enabled in two ways: by using fypp to preprocess base generic templates and/or with a vector class.

1 Like

Great demo of polymorphism, the strategy pattern, and the open-closed principle!

    real :: A(2,2) = reshape([4.0,1.0,1.0,3.0],shape=[2,2])
    real :: b(2) = [1.0,2.0]
    real :: x(2)
    real :: tol = 0.0001

    call cgsolve(A,b,x,tol)
    print *, x               !   9.09090936E-02  0.636363626 

The solution of the linear system,

\begin{bmatrix} 4 & 1\\ 1 & 3 \end{bmatrix} \begin{bmatrix} x_1\\ x_2 \end{bmatrix} = \begin{bmatrix} 1\\ 2 \end{bmatrix}

is x_1 = 1/11, x_2 = 7/11, which matches the result of the conjugate-gradient routine. :+1:

Hej,

Developer of LightKrylov here. I just found out about this discussion (and the Discourse, to be fairly honest). What you are suggesting @kimala is precisely what we are trying to do with LightKrylov. Having type-agnostic Krylov methods seemed like the way to go for some research projects I am working on, and hence the reason we started this library. I’d be more than happy if others find it useful!

The package is still a bit rough around the edges but it actually works. We have been able to link it to our MPI-based computational fluid dynamics solver to compute the leading eigenvectors and singular vectors of the exponential propagator of our system (think the expm of the billion-dimensional Jacobian matrix, evaluated by time-marching the equations). Comparing with another version which uses ARPACK, we observe very similar performance while avoiding dependency hell and keeping a far simpler code base.
Using abstract types also enables us to easily link to another package we’re working on to obtain low-rank approximate solutions of matrix-valued differential equations (e.g. Lyapunov equations).

Although we only include implementations based on our abstract types at the moment, support for standard arrays could easily be added to LightKrylov by using an interface that dispatches depending on the inputs, e.g.

  • A and b are standard Fortran arrays: use an implementation specialized with BLAS-2 calls;
  • A and b extend the abstract types: use what is currently available in LightKrylov.

I think a community-wide effort would be beneficial to agree on a handful of base packages and abstract types from which everybody could extend, to maximize interoperability. This is something going on in the Julia community, which I think is one of the reasons why the Julia scientific computing ecosystem has grown so big so fast. This might deserve a dedicated discussion though.

1 Like

Bonjour @loiseaujc

thank you for joining the discussion, and welcome. I have been checking the LightKrylov project and it is indeed a good prototype for what I am trying to implement. I would be interested in knowing how you implemented the parallelization of the operations, and whether or not you hide them behind an abstract type.

I think my next steps will be

  • (empty) abstract type for matrix and vectors
  • extend this data type for simple tabular matrices (basically using the example from Enrico Facca); this is intended to be just a simple example showing the basic functions of the library
  • extend data type for tabular matrices with blas/lapack
  • extend data type for CRS matrices (as I have already the code for that)
  • -pedantic documentation of these data types and of how to construct new ones

This covers basically all the simple cases where you don’t have distributed-memory parallelization. Potentially, all these cases can already be endowed with OpenMP and OpenACC directives. After this, I will start working on solvers (basing myself on this book), but I think it would be nice to have an idea of how you handle the parallelization of NEK from the beginning of this development; it would avoid making stupid and limiting mistakes.

I hope you will find this conversation interesting enough to stay in touch. I have met a lot of people working with NEK5000, so I truly think that if part of Nek can be bridged to stdlib, that would be an incredible test case for the library and a boost for its development. Moreover, I personally think I have a lot to learn from the development of your tools.

1 Like

The parallelization is handled directly by the user when defining the type-bound procedures of their extension of the abstract_vector type (in particular the notion of norm) and when defining the matrix-vector product for the extension of abstract_linop. LightKrylov does not need to know anything about it. We’re about to add an example where we look for the leading eigenvectors of a 3D Laplacian using MPI. I’ll let you know as soon as it’s there.

On a more general note, one thing that might be useful would be to have (maybe in stdlib) a definition of the most basic abstract types (e.g. abstract_linop, abstract_vector, etc.) and convince everyone to use extensions of these basic types for maximum interoperability. I was planning to start a new thread dedicated to discussing a community-wide effort to agree on some standard abstract types and their associated type-bound procedure interfaces. What I had in mind is something like:

  • stdlib provides some abstract types everybody can/should use.
  • stdlib_linalg provides implementations of standard linear algebra routines (based on, say, BLAS and LAPACK) working directly with intrinsic types, and also defines some standard interfaces.
  • LightKrylov (and others) could then use the stdlib abstract types and conform (whenever possible) to the stdlib_linalg interfaces, extending these capabilities directly to whatever users mean by an element of an inner-product vector space.

As an example, consider the awesome rklib by @jacobwilliams. As far as I understand, it assumes that x is a standard Fortran array although, mathematically, it is a vector (whatever the data structure used to represent it). Re-implementing part of rklib using the abstract_vector type and its associated type-bound procedure norm, one could then use all of the extensive list of RK integrators @jacobwilliams implemented with one’s own definition of what a vector is, while making rklib completely agnostic to the underlying data structure.
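
As a sketch of what that could look like (hypothetical names, not rklib’s actual API), here is an explicit Euler step written purely against deferred vector operations, so the integrator never sees the underlying storage:

module abstract_integrator
implicit none

type, abstract :: abstract_vector
contains
    procedure(v_axpy), deferred :: axpy   ! self := self + a*x
    procedure(v_norm), deferred :: norm
end type

abstract interface
    subroutine v_axpy(self, a, x)
        import abstract_vector
        class(abstract_vector), intent(inout) :: self
        real, intent(in) :: a
        class(abstract_vector), intent(in) :: x
    end subroutine
    real function v_norm(self)
        import abstract_vector
        class(abstract_vector), intent(in) :: self
    end function
end interface

contains

! one explicit Euler step x := x + dt*f, agnostic of the data structure
! behind x and f; higher-order RK steps follow the same pattern
subroutine euler_step(x, f, dt)
    class(abstract_vector), intent(inout) :: x
    class(abstract_vector), intent(in) :: f
    real, intent(in) :: dt
    call x%axpy(dt, f)
end subroutine

end module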

Does that make sense? I’ve been thinking about this for the past few days but haven’t yet really had time to form a very precise idea, even less to express it clearly to others. I think this deserves a dedicated topic and a broader discussion on what we would like to achieve as a community-driven scientific computing ecosystem in Fortran.

2 Likes

I’ve seen that @certik, @jacobwilliams and @ivanpribec are pretty active (probably many others as well, but I’ve just started to go through the Discourse, so sorry about that). Maybe they know of some topics where such ideas are being discussed, or whether such a community-wide effort/discussion has already started.

@loiseaujc welcome to the forum!

Yes, we have an active effort for modern Fortran linear algebra. You can check out the current status in this thread: Weather and climate modeling codes from Fortran to C++ - #46 by FedericoPerini (yes, the original topic was hijacked somewhat).

@FedericoPerini is leading the effort.

Welcome @loiseaujc and congrats on your research efforts!

As @certik said, we’re bringing high-level APIs into the Fortran Standard Library. You may want to check out some of the prototypes at GitHub - perazz/fortran-lapack: Modularized Fortran LAPACK implementation; they should be pretty close to what’s going to gradually fit into stdlib.

The current effort is focused on bringing to Fortran a high-level linear algebra API of the kind most people are used to from Python frameworks. So, no great abstractions, but rather simple, clear interfaces that will make people want to start using Fortran without having to deal with ancient LAPACK calls or with manually building and linking against linear algebra libraries. @hkvzjal has also started working on some sparse matrix storage capabilities.

I like the way you’re abstracting out Krylov systems - I also have a similar approach in my FRESCO code, although that is not in the public domain. I would love to see the LightKrylov package grow into a complete set :slight_smile: there are definitely a lot of potential areas for both research and high-performance applications!

2 Likes

Hej,

This sounds exciting! I went through your repo to look at the API (and to get some ideas of how fypp works; I’m not familiar with it yet). I also took some time to go through different linalg-related issues on the stdlib repo (#749, #628, etc.) while commuting last night and this morning.

I understand and agree that the API and the implementations for standard matrices (i.e. sparse and dense) need to be stabilized before thinking about adding another layer of abstraction. To the extent it’s possible, I’ll try to make LightKrylov stick as close as possible to this tentative API anyway. I’m still convinced, though, that in the long run having stdlib_linalg define/export some fundamental abstract types and abstract interfaces for standard linear algebra tasks (e.g. pcg, gmres, eigs, svds, etc.) would be beneficial. External packages using these types and conforming to these interfaces would then be interchangeable. As an example, consider the task of computing the leading eigenpairs of a linear operator. The user would simply have to define extended types for their definitions of linear operators and vectors, and then call eigs(A, x, lambda, nev), for instance, to get the leading eigenpairs. Changing the backend (e.g. LightKrylov or ARPACK) would be as simple as changing a single dependency in the TOML file (assuming they use fpm, which they should).
This wouldn’t prevent each package from additionally exposing its own lower-level API, with maybe more options, for increased performance.

In Julia, I think something like LinearSolve.jl is precisely trying to define these basic/common interfaces to make interoperability far easier. And from what I’ve seen and my little experience with Julia, this interoperability of packages is actually a big plus for this language.

1 Like

That would be great @loiseaujc!

I believe the high-level interfaces would not change that much. We can just add interfaces for the abstract linear operators, as I think embedding intrinsic matrices (think real :: A(:,:)) into a derived type is not an option for most people. It looks like that is similar to what the Julia package is doing (I just VERY quickly skimmed through it). If you’ve got something like

type, abstract :: linear_operator
   ! any data here: preconditioner, etc.
contains
   procedure(la_matmul), deferred :: matmul
   ...
end type linear_operator

Then you would have

interface eigvals
        module procedure stdlib_linalg_eigvals_s
        ... ! other intrinsics interfaces
        module procedure stdlib_linalg_eigvals_la
end interface eigvals

!> Return an array of eigenvalues of matrix A.
subroutine stdlib_linalg_eigvals_la(a,err,lambda)
         class(linear_operator), intent(in) :: a
         type(linalg_state),intent(out) :: err
         class(vector), intent(inout) :: lambda
end subroutine

Now, the problem with polymorphism is that you have to use subroutines everywhere, so that you can pass your type(vector_whatever), intent(inout); otherwise you would end up returning class(vector), allocatable results, which is an utter nightmare and probably extremely slow. The alternative would be to have an interface for each combination of linear operator and vector types, but then it’s not really polymorphic anymore.

I might say a very stupid and short-sighted thing here, so please correct me if you think I am wrong.

But I think that a user who calls a Krylov method with a real :: A(:,:) array is not going to be knee-deep in optimization and performance, so once the Krylov library is implemented for derived types we can specialize it for intrinsic arrays and differentiate them in the call; maybe this is less work than accommodating the whole range of users, from those who want to use intrinsic arrays to those who need distributed-memory parallelism.
We could have stdlib_linalg_pcg for intrinsics (or _spcg, as in simple pcg) and stdlib_krylov_pcg for abstract operators; in my head, once you know you are in the Krylov realm you are already walking down the rabbit hole.

Absolutely @kimala, that’s exactly what I was trying to suggest here

The advantage of the interface block is compile-time path resolution, rather than the runtime resolution that you would have with a polymorphic version.

Yesterday I stumbled upon the PyLops package, which I don’t think we mentioned before; it seems really interesting to compare what we (pluralis maiestatis) would like to achieve with what is implemented in other languages.

API proposals

C++

(Apparently) Python/Mixed

  • PyLops, an open-source Python library focused on providing a backend-agnostic, idiomatic, matrix-free library of linear operators and related computations.
  • PETSc (with its Krylov suite, Matrix suite and Vector suite), the Portable, Extensible Toolkit for Scientific Computation, pronounced PET-see (/ˈpɛt-siː/), is for the scalable (parallel) solution of scientific applications modeled by partial differential equations (PDEs).
  • SciPy (with its LinearOperator interface)

Fortran

  • LightKrylov, which leverages Fortran’s abstract type feature to provide generic implementations of various Krylov methods. The only requirement for the user to benefit from the capabilities of LightKrylov is to extend the abstract_vector and abstract_linop types to define their notion of vectors and linear operators.

Feel free to point me to other resources.

Edit: Adding resources as people list them, thanks to @hkvzjal

2 Likes

Not a package but a standardization initiative: the Python Array API Purpose and scope — Python array API standard 2023.12 documentation

Purpose and scope

Introduction

Python users have a wealth of choice for libraries and frameworks for numerical computing, data science, machine learning, and deep learning. New frameworks pushing forward the state of the art in these fields are appearing every year. One unintended consequence of all this activity and creativity has been fragmentation in multidimensional array (a.k.a. tensor) libraries - which are the fundamental data structure for these fields. Choices include NumPy, Tensorflow, PyTorch, Dask, JAX, CuPy, MXNet, Xarray, and others.
The APIs of each of these libraries are largely similar, but with enough differences that it’s quite difficult to write code that works with multiple (or all) of these libraries. This array API standard aims to address that issue, by specifying an API for the most common ways arrays are constructed and used.
Why not simply pick an existing API and bless that as the standard? In short, because there are often good reasons for the current inconsistencies between libraries. The most obvious candidate for that existing API is NumPy. However NumPy was not designed with non-CPU devices, graph-based libraries, or JIT compilers in mind. Other libraries often deviate from NumPy for good (necessary) reasons. Choices made in this API standard are often the same ones NumPy makes, or close to it, but are different where necessary to make sure all existing array libraries can adopt this API.

1 Like

C++

Rust

2 Likes

In a private discussion, @loiseaujc told me

This actually got me thinking about how to handle an interface to different routines that differ in their inner operations but not in their arguments.

@milancurcic’s book is quite clear about this; it states (pages 248-249)

I give you some examples for which these four strategies are unsatisfying:

  • “Conjugate Gradient implemented with intrinsics” vs “Conjugate Gradient implemented with LAPACK/BLAS”: the two routines clearly share all their passed arguments, and all the passed arguments can be the same. Points 2 and 3 clearly do not apply. I could then use pcg(A, b, x, info) for the first one and pcg(A, b, x, info, useblas=.true.) for the second one, but since generic resolution depends on an optional argument being present, not on its value, calling pcg(A, b, x, info, useblas=.false.) would still reach the BLAS version; the user would have to remove the whole useblas=.true. from the source every time rather than switching it on/off somewhere.
  • “Left Preconditioned CG” vs “Right Preconditioned CG”: same as before, but a little less trivial. Sure, one could set up an argument to the procedure, prec='left' or prec='right', and use a select case inside the routine, but is that a good strategy?
  • “matrix multiplication preconditioner” vs “backward-forward substitution preconditioner”: you provide a matrix for the preconditioner, but how do you then choose between one routine that applies it as a matrix-vector product and another that performs a backward-forward substitution (because maybe you provided the lower or upper triangular factor)?

I would like to challenge you to give me suggestions without invoking abstract operators; I will start thinking about them soon, but not now.
Thank you very much

Nice analysis @kimala!

This one deals with library options; I believe the best way to approach it is at compile time → handle the internals with preprocessor macros.
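
For instance, a sketch of a kernel switched at compile time (USE_BLAS is a hypothetical flag; build with -DUSE_BLAS and link BLAS to get the sdot path):

! hypothetical dot-product kernel; the solver calls vdot everywhere
real function vdot(x, y)
    implicit none
    real, intent(in) :: x(:), y(:)
#ifdef USE_BLAS
    real, external :: sdot    ! single-precision dot product from BLAS
    vdot = sdot(size(x), x, 1, y, 1)
#else
    vdot = dot_product(x, y)
#endif
end function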

Preconditioning is an operation that takes the form of a matrix multiplication/inversion, but in practice it is a strategy to simplify the underlying problem algebra. So IMHO it’s important that it is handled flexibly. For example, if you want Jacobi preconditioning, you only need to manipulate the diagonal, not the whole matrix, etc.

What did old codes do? They asked the user to provide a “preconditioning” routine that applies M^{-1}x. I think it’s exactly the right approach. You can realize it in many ways. For example, one could be:

procedure(precondition_left), pointer :: left_prec => null()
procedure(precondition_right), pointer :: right_prec => null()

where either (or neither) of these procedure pointers is associated at initialization.
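
For instance, a hypothetical Jacobi preconditioner fits this mold with nothing more than the diagonal (all names here are illustrative):

module jacobi_prec
implicit none

abstract interface
    subroutine precondition_left(x, y)   ! y := M^{-1} x
        real, intent(in)  :: x(:)
        real, intent(out) :: y(:)
    end subroutine
end interface

real, allocatable :: diag_a(:)   ! diag(A), stored at setup time

contains

! Jacobi preconditioner: M = diag(A), so M^{-1}x is elementwise
subroutine apply_jacobi(x, y)
    real, intent(in)  :: x(:)
    real, intent(out) :: y(:)
    y = x / diag_a
end subroutine

end module

! at solver setup (sketch):
!    procedure(precondition_left), pointer :: left_prec => null()
!    left_prec => apply_jacobi
! in the iteration, apply M^{-1} only when associated:
!    if (associated(left_prec)) call left_prec(r, z)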

1 Like