Radial basis function-generated finite differences (RBF-FD) are a relatively young numerical method for solving PDEs or performing interpolation on unstructured point sets in multiple dimensions. An approachable description of the method for solving PDEs can be found in the book by Fornberg and Flyer, A Primer on Radial Basis Functions with Applications to the Geosciences, SIAM, 2015, or alternatively in the article by the same authors, Solving PDEs with Radial Basis Functions, Acta Numerica, 24, 2015.
(If I’m not mistaken, the idea of using radial basis functions for function approximation originally comes from M. J. D. Powell, who is also the original author of COBYLA and other optimization algorithms available in PRIMA.)
Modern variations of the RBF method involve solving dozens of small linear systems for the shape factors in the irregular stencils. The linear systems have the form

$$
\begin{bmatrix} A & P \\ P^{T} & 0 \end{bmatrix}
\begin{bmatrix} \boldsymbol{w} \\ \boldsymbol{\lambda} \end{bmatrix}
= \boldsymbol{b},
$$
where $\boldsymbol{w}$ holds the shape factors (stencil weights), $\boldsymbol{\lambda}$ the Lagrange multipliers for the polynomial constraints, $A$ is a matrix with entries $A_{ij} = \phi(|\boldsymbol{x}_i - \boldsymbol{x}_j|)$, $\phi(\bullet)$ being the radial basis function (for instance $\phi(r) = r^3$), and $P$ is a polynomial augmentation matrix. In 2-d this will usually be a Vandermonde-type matrix,

$$
P = \begin{bmatrix}
1 & x_1 & y_1 & x_1^2 & x_1 y_1 & y_1^2 & \cdots \\
1 & x_2 & y_2 & x_2^2 & x_2 y_2 & y_2^2 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \\
1 & x_n & y_n & x_n^2 & x_n y_n & y_n^2 & \cdots
\end{bmatrix},
$$
where the maximum order of the polynomial basis, $n_P$, determines the accuracy of the approximation.
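(For reference, the number of monomials in a 2-d polynomial basis of total degree $m$ is

$$
n_P = \binom{m+2}{2} = \frac{(m+1)(m+2)}{2},
$$

so $n_P = 6$ corresponds to degree 2 and $n_P = 28$ to degree 6, consistent with the values quoted below.)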
In Fortran pseudo-code, the setup of the matrix for a 2-d problem looks something like this:
integer, parameter :: npoly = 6
integer, parameter :: ix(npoly) = [0,1,0,2,1,0], &
                      iy(npoly) = [0,0,1,0,1,2]
real(wp) :: A(n,n), P(n,npoly), M(n+npoly,n+npoly)
! RBF part, A_ij = phi(|x_i - x_j|)
A = phi( hypot(spread(x,1,n) - spread(x,2,n), &
               spread(y,1,n) - spread(y,2,n)) )
! Polynomial part, P
do k = 1, npoly
   P(1:n,k) = (x**ix(k)) * (y**iy(k))
end do
! Full matrix, M = [A P; P^T 0]
M(1:n,1:n) = A
M(1:n,n+1:n+npoly) = P
M(n+1:n+npoly,1:n) = transpose(P)
M(n+1:n+npoly,n+1:n+npoly) = 0.0_wp
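Here phi is assumed to be an elemental function implementing the radial kernel. A minimal sketch, using the cubic RBF $\phi(r) = r^3$ mentioned above and the same working-precision kind wp, could look like:

! Cubic RBF kernel, phi(r) = r^3
elemental function phi(r) result(v)
   real(wp), intent(in) :: r
   real(wp) :: v
   v = r**3
end function phi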
The next step is to solve the RBF-FD system. In MATLAB this can be achieved with the backslash operator, M\rhs. In Fortran I’m using LAPACK directly:
! ... prepare M
np = n + npoly
call dgetrf(np, np, M, np, ipiv, info) ! factorize
! ... prepare RHS
call dgetrs('N', np, 1, M, np, ipiv, rhs, np, info) ! solve
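To make the elided RHS preparation concrete, here is a sketch of what could go in place of the "! ... prepare RHS" line for one particular case: weights approximating the 2-d Laplacian at a stencil centre (xc, yc), with $\phi(r) = r^3$, for which $\nabla^2 \phi(r) = 9r$. The choice of operator and the centre coordinates xc, yc are only illustrative:

! Laplacian applied to the RBF part: nabla^2 r^3 = 9 r in 2-d
rhs(1:n) = 9.0_wp * hypot(x - xc, y - yc)
! Laplacian applied to the monomial basis 1, x, y, x^2, x*y, y^2
! (constant for this degree-2 basis, so the evaluation point drops out)
rhs(n+1:n+npoly) = real([0, 0, 0, 2, 0, 2], wp)

After the call to dgetrs, rhs(1:n) then contains the stencil weights and rhs(n+1:n+npoly) the Lagrange multipliers.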
In practice, the dimensions of the matrices are n from 9 to 60 and npoly in the range 6-28 (sufficient for a sixth-order accurate method in 2-d), so the individual matrices are not that large.
Since these operations are to be performed millions of times during the course of the computation, I’ve been wondering: is it possible to exploit the known block structure and other matrix properties to increase the performance of the factorization and solution steps?
I posted the same question two years ago on SciComp Stack Exchange, but to no avail: