SpecialMatrices (v0.1.0-alpha): Pre-release

Hi everyone!

After LightKrylov earlier this week, I’m happy to announce another pre-release: SpecialMatrices v0.1.0-alpha! The code can be found here and the online documentation here. It really is just a minor package whose goal is to define some particularly common matrix types and associated computational routines. It draws inspiration from SpecialMatrices.jl and ToeplitzMatrices.jl.

While stdlib_linalg is focused on delivering very sane defaults for dense matrices, many of the matrices encountered in scientific computing are actually highly structured, e.g.

  • Tridiagonal matrices appear all over the place when working with second-order accurate finite-difference approximations of differential operators (see the discretization sketched right after this list).
  • Toeplitz matrices, which encode convolution operations, are a centerpiece of signal processing.
  • Hankel matrices are ubiquitous in the control and estimation of linear time-invariant systems.
  • etc.
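To make the first bullet concrete: discretizing u''(x) on a uniform grid with the standard second-order stencil

u''(x_i) \approx \frac{u_{i-1} - 2u_i + u_{i+1}}{h^2}

couples each grid point only to its two immediate neighbors, so the discrete operator is exactly the tridiagonal matrix

A = \frac{1}{h^2} \begin{pmatrix} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{pmatrix},

which only requires storing 3n - 2 entries instead of n^2.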

The aim of SpecialMatrices is thus to export specialized types for such matrices and to provide high-performing implementations of standard operations on them (e.g. solve, matmul, eig, etc.) which can then be used as drop-in replacements for stdlib_linalg’s more generic high-level routines. At the moment, this alpha version only provides the following types: Diagonal, Bidiagonal, Tridiagonal and SymTridiagonal, and the following interfaces: matmul, solve, transpose and dense. Moreover, since this is work in progress, only real(dp) versions are implemented at the moment. In the long run, however, I plan to extend these with at least Circulant, Toeplitz and Hankel, and to make use of fypp for the different flavors of each.
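To give a flavor, here is a minimal sketch of the intended usage (signatures may still change, this being an alpha; solve is assumed to mirror stdlib_linalg’s x = solve(A, b)):

program sketch
    use stdlib_linalg_constants, only: dp
    use specialmatrices
    implicit none

    integer, parameter :: n = 1000
    type(Tridiagonal) :: A
    real(dp), allocatable :: dl(:), dv(:), du(:), b(:), x(:)

    ! Build a tridiagonal matrix from its three bands (3n-2 floats
    ! instead of the n**2 needed by a dense array).
    allocate (dl(n-1), dv(n), du(n-1), b(n))
    call random_number(dl); call random_number(dv); call random_number(du)
    call random_number(b)
    A = Tridiagonal(dl, dv, du)

    ! Specialized solve and matmul, used as drop-in replacements for
    ! the generic stdlib_linalg / intrinsic versions.
    x = solve(A, b)
    b = matmul(A, x)
end program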

Note that I’ve just started working on this project so the code ain’t optimized in any way at the moment, but at least it is there and already provides some improvement in computational speed over the standard stdlib_linalg high-level routines when working with structured matrices. Although my plan is to wrap up most of the real(dp) implementations for the moment (mostly because I want to use these for another research project of mine), anyone willing to participate is more than welcome!

PS: Again, I’d like to congratulate @FedericoPerini, @jeremie.vandenplas, @hkvzjal and all the others who worked on the stdlib_linalg_lapack module which is used under the hood by SpecialMatrices wherever possible.


Quick update for those interested: SpecialMatrices now provides the following utilities for Diagonal, Bidiagonal, Tridiagonal and SymTridiagonal matrices:

  • det(A) : Compute the determinant of A using specialized \mathcal{O}(n) methods instead of the classical \mathcal{O}(n^3) ones (a sketch of the underlying recurrence for the tridiagonal case is given after this list).
  • trace(A) : Compute the trace of A.
  • transpose(A) : Compute the transpose of A and return a matrix of the same type.
  • matmul : Overloads the Fortran intrinsic matmul with drivers specialized for the particular types defined in SpecialMatrices.
  • solve : Solve a linear system Ax = b with specialized tridiagonal solvers (gtsv and ptsv from stdlib_lapack).
  • eig and eigvals : Eigenvalue decomposition for Bidiagonal and Tridiagonal matrices. Uses stdlib_eig under the hood since no specialized algorithms exist for general tridiagonal matrices.
  • eigh and eigvalsh : Eigenvalue decomposition for Diagonal and SymTridiagonal matrices using dedicated implementations provided by stdlib_lapack.
  • svd and svdvals : Singular value decomposition for all matrix types defined in SpecialMatrices, using specialized solvers whenever possible.
  • operator(*) : Overloads the Fortran intrinsic operator * for scalar-matrix multiplication.
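For the curious, the \mathcal{O}(n) determinant mentioned above boils down to a classical three-term recurrence. A standalone sketch for the tridiagonal case, operating directly on the three bands (the actual implementation may differ):

module tridiag_det_sketch
    use stdlib_linalg_constants, only: dp
    implicit none
contains
    pure function tridiag_det(dl, dv, du) result(d)
        ! Determinant of a tridiagonal matrix from its bands, using the
        ! recurrence f_i = dv_i*f_{i-1} - du_{i-1}*dl_{i-1}*f_{i-2},
        ! i.e. O(n) flops and no factorization.
        real(dp), intent(in) :: dl(:), dv(:), du(:)
        real(dp) :: d
        real(dp) :: f_prev, f_curr, f_next
        integer :: i
        f_prev = 1.0_dp; f_curr = dv(1)
        do i = 2, size(dv)
            f_next = dv(i)*f_curr - du(i-1)*dl(i-1)*f_prev
            f_prev = f_curr; f_curr = f_next
        end do
        d = f_curr
    end function
end module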

At the moment, a handful of routines are simple wrappers around stdlib_linalg functions even though dedicated drivers exist in stdlib_lapack. This was done more out of convenience/laziness, to get the interfaces out before actually optimizing the underlying implementations. The next few actions will be:

  1. Make use of specialized routines provided by stdlib_lapack wherever possible. This includes the svd computation for Bidiagonal matrices, but also the combined use of gttrs (factor-solve) and gtrfs (iterative refinement) for all tridiagonal solves.
  2. Implement a proper error handling strategy following closely what is being done in stdlib_linalg to maximize consistency with it.
  3. Make use of fypp to provide real and complex versions of the different types (as well as single and double precisions).

Since this is a small-ish project I work on while commuting, this will keep me busy for some time. I am nonetheless curious about a few things and would like to get some community opinions:

  • As of today, could the methods implemented in SpecialMatrices be of direct use to anyone here (other than myself) in a current project or project idea?
  • Once the implementations for Diagonal, Bidiagonal, Tridiagonal and SymTridiagonal are done (including the fypp stuff), I’m not sure which types of matrices I should include next. What I’ll eventually need one day or another in some of my projects or teaching are:
    • Circulant matrices: these do need an FFT library though. Since it is fpm-compatible, fftpack seems like a natural choice, but I don’t think it supports single precision yet. Is anyone working on this? Or should I simply rely on double promotion for the moment? (The diagonalization identity that makes the FFT essential is recalled after this list.)
    • Toeplitz matrices: kind of similar to Circulant regarding the use of the FFT.
    • Hankel matrices: same thing again.
    • Companion matrices: closely related to polynomials expressed in the standard monomial basis (i.e. 1, x, x^2, x^3, \cdots). Fast algorithms exist to compute their eigenvalues, solve linear systems, etc. Additionally, their singular values can be computed analytically very easily.
    • Vandermonde matrices: these arise naturally in polynomial interpolation problems, and there actually is an \mathcal{O}(n^2) algorithm to solve Ax = b when A is Vandermonde.
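For context on why the FFT keeps coming up in the first three items: a circulant matrix C with first column c is diagonalized by the DFT matrix F,

C = F^{-1} \operatorname{diag}(Fc)\, F \quad \Longrightarrow \quad Cx = \mathrm{ifft}\big(\mathrm{fft}(c) \odot \mathrm{fft}(x)\big),

so matrix-vector products cost \mathcal{O}(n \log n) and the eigenvalues come out of a single FFT of c. Toeplitz (and, up to a row flip, Hankel) matrix-vector products reduce to the circulant case by embedding the matrix into a circulant of roughly twice the size.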

I also plan to provide the Strang matrix and the Poisson2D one. This is pretty high on my to-do list but should be relatively easy since they are essentially variations of SymTridiagonal matrices (see the sketch right below).
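As a sketch of how the Strang matrix (the classical second-difference matrix) would piggy-back on SymTridiagonal; the SymTridiagonal(dv, ev) constructor is assumed here by analogy with Tridiagonal, and the actual signature may differ:

module strang_sketch
    use stdlib_linalg_constants, only: dp
    use specialmatrices, only: SymTridiagonal
    implicit none
contains
    function strang(n) result(S)
        ! Strang matrix: 2 on the main diagonal, -1 on the off-diagonals.
        integer, intent(in) :: n
        type(SymTridiagonal) :: S
        real(dp) :: dv(n), ev(n-1)
        dv = 2.0_dp
        ev = -1.0_dp
        S = SymTridiagonal(dv, ev)   ! assumed constructor, see note above
    end function
end module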


Nice work!

A dumb question about those overloaded routines.
Take, say, the overloaded det():

d = det(A)

In your SpecialMatrices module, do you detect the type of the matrix A? That is,
if A is detected to be a general matrix, you use the default det();
if A is detected to be some kind of diagonal matrix, you use your specialized and faster implementation of det()?
If so, does this ‘detection’ impact the speed?

Thanks!

specialmatrices only adds to the det interface. If you use standard rank-2 arrays, you’d still need to use stdlib_linalg. Consider the following piece of code:

program example
    use stdlib_linalg_constants, only: dp
    use stdlib_linalg, only: det
    use specialmatrices
    implicit none

    ! Dimension of the matrix.
    integer, parameter :: n = 100
    ! Rank-2 array version.
    real(dp), allocatable :: Amat(:, :)
    ! SpecialMatrices lazy representation
    type(Tridiagonal) :: A
    real(dp), allocatable :: dl(:), dv(:), du(:)
    real(dp) :: d

    ! Initialize arrays.
    allocate (dl(n-1), dv(n), du(n-1))
    call random_number(dl); call random_number(dv); call random_number(du)
    
    ! Initialize matrix.
    A = Tridiagonal(dl, dv, du) ! Lazy representation (3n-2 floats)
    Amat = dense(A) ! Array representation (n² floats). dense is another utility function provided by specialmatrices

    ! Compute det(A).
    d = det(A) ! A is of type(Tridiagonal). Uses a specialized routine under the hood.
    d = det(Amat) ! Amat is a rank-2 array. Uses stdlib_linalg_det under the hood.            
    
end program

Dispatch to the correct driver (i.e. specialmatrices for A or stdlib_linalg for Amat) is done at compile time: det is a generic interface, and the compiler selects the specific procedure from the declared type of its argument. My understanding is that this does not incur any runtime overhead.
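In case it helps, here is a self-contained toy version of the mechanism (illustrative names only, not the actual specialmatrices source): both specifics live under the same generic name det, and the compiler picks one from the declared type of the argument, so no runtime branching ever happens.

module dispatch_demo
    implicit none
    integer, parameter :: dp = kind(1.0d0)

    type :: Diagonal_t
        real(dp), allocatable :: dv(:)
    end type

    ! One generic name, two specifics: resolution happens at compile time.
    interface det
        module procedure det_diagonal   ! picked when A is type(Diagonal_t)
        module procedure det_dense      ! picked when A is a rank-2 array
    end interface

contains

    pure function det_diagonal(A) result(d)
        type(Diagonal_t), intent(in) :: A
        real(dp) :: d
        d = product(A%dv)   ! O(n) for a diagonal matrix
    end function

    function det_dense(A) result(d)
        real(dp), intent(in) :: A(:, :)
        real(dp) :: d
        real(dp), allocatable :: B(:, :)
        integer :: i, j, n, p
        ! Toy O(n^3) Gaussian elimination with partial pivoting, playing
        ! the role of stdlib_linalg's LU-based driver.
        n = size(A, 1); B = A; d = 1.0_dp
        do i = 1, n
            p = maxloc(abs(B(i:n, i)), dim=1) + i - 1
            if (p /= i) then
                B([i, p], :) = B([p, i], :)   ! row swap flips the sign
                d = -d
            end if
            if (B(i, i) == 0.0_dp) then
                d = 0.0_dp; return
            end if
            d = d*B(i, i)
            do j = i + 1, n
                B(j, i:) = B(j, i:) - (B(j, i)/B(i, i))*B(i, i:)
            end do
        end do
    end function

end module dispatch_demo

With such an interface in scope, d = det(A) and d = det(Amat) bind to different specifics purely from the declared types, exactly as in the example above.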
