GSoC 2023 Contributor - spMatrices or Singular Multidimensional Quadrature


I’m Adam, a physics PhD student. I know quite a few languages and have programmed for about a decade, though only about a year of that with Fortran. Despite that, I quite enjoy (modern) Fortran! I attended the recent monthly call; it was nice to meet everyone. I’ve been on the fence about GSoC, but figured I’d post here about what I’d be interested in and get feedback while I work on a patch/PR.

If you want my previous contribution credentials: I contributed to the nonlinear conjugate gradient solvers in the PETSc/TAO package a few years back [during an internship right before I started my PhD]. In particular, I implemented a few modern CG variants and added diagonal preconditioning based on limited-memory BFGS to all of the CG methods. You can find my name as an author on the User’s Manual publication.

I have two interests for the GSoC:

  • Multidimensional Quadrature with Integrable Singularities (basically implementing these two papers: Paper1 Paper 2 )
    • This is my stronger interest, as it is somewhat relevant to my research, so I can justify the time spent to my PhD advisor.
    • This applies when the integrand has the form f(x)/(z - g(x)) [the user knows f(x) and g(x), or can evaluate them numerically] and wants to compute the parameterized integral I(z) over a domain; there are also analytic forms for computing integrals of I(z) at the same time.
    • A very limited version of this exists as the so-called “tetrahedron method” in the condensed matter community, but these papers outline a substantial generalization of it.
    • The method is desirable when the denominator leads to an integrable singularity but f(x) or g(x) are expensive to evaluate at many points [so adaptive refinement isn’t a great option].
    • This would be a good tool at least for the computational physics community, but I imagine it would also be useful for certain FEM codes, among other applications.
      • Current interest: I know developers of GPAW and EPW who want this built (so long as it’s fast, easy to include [i.e. self-contained], and parallel-aware)
    • It may be too niche for stdlib, and it is in fact fairly complex, but I would like it to be implemented and open source. I don’t know whether this qualifies as a valid GSoC 2023 project; feedback would be welcome.
  • Sparse matrix support in stdlib_linalg
    • This is less relevant to my personal research [so I’d be able to spend less time on it], but I do have a special place in my heart for sparse matrices. Saving memory is a blast.
    • Besides support for CSR and COO, I think special attention to banded matrices (dia_array in scipy.sparse) and Kronecker-product matrices (common for [N>1]-dimensional derivative stencils) could make sense. Each would get its own matvec kernel. Kronecker-product matrices could also be composed of either sparse or dense factors.
      • I will probably add the Kronecker product (dense → dense) to stdlib_linalg for my patch/PR requirement. I have a working snippet; I just have to figure out fypp :slight_smile:
        • Should I add two implementations that are stylistically different (and have somewhat different performance) and let whoever manages the PR decide which one to keep, or just add the one that’s faster for big (dense) matrices?
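To make the integrand form above concrete, here is a minimal NumPy sketch of a 1D toy case with f(x) = 1 and g(x) = x, where I(z) has the closed form log(z / (z - 1)). This is plain brute-force quadrature, not the papers’ method; the functions, domain, and grid size are placeholders chosen for illustration:

```python
import numpy as np

# Hedged 1D sketch (not the papers' algorithm): brute-force quadrature of
# I(z) = integral_0^1 f(x) / (z - g(x)) dx for the toy case f(x) = 1,
# g(x) = x, where the closed form is I(z) = log(z / (z - 1)).
def I_numeric(z, n=200_001):
    x = np.linspace(0.0, 1.0, n)
    fx = 1.0 / (z - x)                 # integrand values on a uniform grid
    h = x[1] - x[0]
    # composite trapezoid rule
    return h * (0.5 * fx[0] + fx[1:-1].sum() + 0.5 * fx[-1])

z = 0.5 + 0.01j                        # pole of the integrand sits near x = 0.5
numeric = I_numeric(z)
analytic = np.log(z / (z - 1.0))
```

The sharp peak near the (nearly) singular point forces a dense uniform grid here; avoiding exactly that cost when f and g are expensive to evaluate is the motivation described above.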
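As a side note on the Kronecker-product matvec: the product matrix never needs to be materialized. A hedged NumPy sketch of the standard identity (A ⊗ B) vec(X) = vec(A X Bᵀ), stated here for row-major reshapes; the matrix sizes are arbitrary illustrative choices:

```python
import numpy as np

# Sketch of the Kronecker matvec trick: for A (m x n) and B (p x q),
# (kron(A, B) @ x) equals (A @ X @ B.T).ravel() with X = x.reshape(n, q)
# under NumPy's row-major reshape/ravel convention.
rng = np.random.default_rng(0)
m, n, p, q = 3, 4, 5, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((p, q))
x = rng.standard_normal(n * q)

y_fast = (A @ x.reshape(n, q) @ B.T).ravel()   # O(mnq + mpq) work, no big matrix
y_dense = np.kron(A, B) @ x                    # O(mnpq) work and memory
```

The two results agree to round-off, which is why a dedicated Kronecker matvec kernel pays off for large factors.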

Hi Adam,

Quite impressive and detailed work! Welcome to the community!

I think that your point 1, which is closer to your current research, might indeed be quite a “niche” subject, but hey, most cool and useful libraries start as niche ideas that get curated over time. I’ll let the more experienced Fortran members speak to that, but as you said yourself, it might not be the most suitable project for GSoC. In any case, you could always start a repo and get help and feedback from the community. If you make it an fpm package, that would get you some extra momentum :wink:

I think the sparse matrix support would be a “less glorious but broader impact” contribution. Beyond the official stdlib repo, you might also want to take a look at @certik ’s experimental fork (stdlib/src at csr · certik/stdlib · GitHub): he has already started nice non-OOP work on the base kernels in stdlib_experimental_sparse.f90, and the discussions there contain some interesting exchanges on the goals and expectations for the implementation. The discussions also raise the idea of an OOP interface that would give a user experience similar to SciPy’s when creating/transforming/using different sparse matrices. The whole topic is already one of the GSoC items listed by the community, so take a look at those discussions and bring your ideas forward :+1:

One step further here: if the sparse library gets promoted, it should indeed offer a fair exposition of the basic “taxonomy” of sparse matrix formats: COO/CSR/CSC/LIL/ELLPACK, etc., each with its basic kernels for matmul and matvec, plus conversions between the formats.
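For a sense of scale, the core CSR matvec kernel is only a few lines. A hedged NumPy sketch (illustrative only, not stdlib’s API; the 3x3 matrix is a made-up example):

```python
import numpy as np

# Sketch of a CSR matvec kernel. CSR stores the nonzero values, their
# column indices, and row pointers into both arrays. Example matrix:
#     [1 0 2]
# M = [0 3 0]
#     [4 0 5]
data    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
indices = np.array([0, 2, 1, 0, 2])   # column index of each stored value
indptr  = np.array([0, 2, 3, 5])      # row i occupies data[indptr[i]:indptr[i+1]]

def csr_matvec(data, indices, indptr, x):
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):           # one output entry per row
        for k in range(indptr[i], indptr[i + 1]):
            y[i] += data[k] * x[indices[k]]
    return y

x = np.array([1.0, 1.0, 1.0])
y = csr_matvec(data, indices, indptr, x)   # -> [3., 3., 9.]
```

The conversions between formats (and keeping them consistent with an OOP layer on top) are where most of the design work lies.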

I think a contribution to a standard library should keep a balance between performance and generality, so that the implementation is easy to test and easy to understand when reading it. If you have a fast implementation that you think is also easily maintainable, propose it. If it feels too complex, maybe start with something simpler and then explore generalizations via high-level conditional branching to preserve performance.

Just some words of encouragement!


On this topic, you might want to look beyond FEM and think about BEM (boundary element methods) or MoM (the Method of Moments), which make extensive use of Green’s functions to compute fully connected (dense) matrices. Just to give you a potential way of exposing your work to other communities.


> In the discussions there is also the point of having an OOP interface to get an user experience similar to that of SciPy when creating/transforming/using different sparse matrices.

Indeed, generally NumPy and SciPy usage patterns make me quite happy when I’m using them.

Excellent to hear; that’s the version I decided to push.

Your other comments gave me much to think about!
