# Defining real and complex variants of an abstract type for vector calculus

Hej all,

For the past few months, I have been working on LightKrylov, a lightweight Fortran library abstracting out many of the details of Krylov-based methods for large-scale linear algebra problems. At its core is the notion of an `abstract_vector`, which is agnostic both to the user’s data structure defining what they mean by a `vector` and to any underlying parallelization. This abstract type basically looks like

```fortran
type, abstract, public :: abstract_vector
contains
   procedure(abstract_scal) , pass(self), deferred, public :: scal
   procedure(abstract_axpby), pass(self), deferred, public :: axpby
   procedure(abstract_dot)  , pass(self), deferred, public :: dot
   procedure, pass(self)                                   :: norm
end type abstract_vector
```

with the abstract interfaces defined as

```fortran
abstract interface
   subroutine abstract_scal(self, alpha)
      import abstract_vector, wp
      class(abstract_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
   end subroutine abstract_scal

   subroutine abstract_axpby(self, alpha, vec, beta)
      import abstract_vector, wp
      class(abstract_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
      class(abstract_vector), intent(in) :: vec
      real(kind=wp), intent(in) :: beta
   end subroutine abstract_axpby

   real(kind=wp) function abstract_dot(self, vec) result(alpha)
      import abstract_vector, wp
      class(abstract_vector), intent(in) :: self, vec
   end function abstract_dot
end interface
```

where `wp` is my working precision. The function `norm` is defined inside the associated module as

```fortran
real(kind=wp) function norm(self) result(alpha)
   class(abstract_vector), intent(in) :: self
   alpha = sqrt(self%dot(self))
end function norm
```

Based on the definitions of `abstract_axpby` and `abstract_dot`, it is implicitly assumed that this `abstract_vector` is a real-valued vector. This works fine, and I’ve been able to link `LightKrylov` with the MPI-based CFD solver Nek5000 to compute the leading eigenpairs and singular triplets of my linear operators of interest (themselves extended from an `abstract_linop` type, with billion-dimensional vectors on a few hundred processors). For the eigenvectors, using an array of derived `abstract_vector`s, I’m following the LAPACK convention of storing the real and imaginary parts in the `i`-th and `i+1`-th elements of the array. So far so good.
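For readers unfamiliar with that convention, here is a minimal, self-contained sketch (plain dense arrays for illustration, not `LightKrylov`’s derived types) of how a complex eigenvector is reassembled from two consecutive real columns:

```fortran
! Sketch: reassembling a complex eigenvector stored LAPACK-style, i.e. the
! real part in column i and the imaginary part in column i+1 of a real array.
program lapack_convention
   implicit none
   integer, parameter :: wp = kind(1.0d0)
   real(kind=wp)    :: V(3, 2)   ! columns i and i+1 of the (real) basis
   complex(kind=wp) :: x(3)      ! the reconstructed eigenvector

   V(:, 1) = [1.0_wp, 0.0_wp, 2.0_wp]    ! real part
   V(:, 2) = [0.0_wp, 1.0_wp, -1.0_wp]   ! imaginary part

   ! Eigenvector for the eigenvalue; its complex conjugate pairs with conjg(x).
   x = cmplx(V(:, 1), V(:, 2), kind=wp)
   print *, x
end program lapack_convention
```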

The problem I’m facing now is extending the whole thing to complex arithmetic without too much duplicated code. The easiest option for me would be to keep `LightKrylov` as it is and force the user to rewrite their linear algebra problem in $\mathbb{C}^n$ as an equivalent problem in $\mathbb{R}^{2n}$. This is the approach I’m currently using, but I ain’t too happy with it as it puts a lot of burden on the user. In particular, whenever I want to make the left and right eigenvectors bi-orthogonal, I need a bit of bookkeeping.
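For concreteness, the real-equivalent formulation is the standard one: writing $A = A_r + i A_i$ and $x = x_r + i x_i$, a complex system $Ax = b$ in $\mathbb{C}^n$ becomes, in $\mathbb{R}^{2n}$,

$$
\begin{bmatrix} A_r & -A_i \\ A_i & A_r \end{bmatrix}
\begin{bmatrix} x_r \\ x_i \end{bmatrix}
=
\begin{bmatrix} b_r \\ b_i \end{bmatrix},
$$

which doubles the dimension and forces the user to implement the block operator by hand.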

Another approach would be to replace `abstract_vector` everywhere in the code base with `abstract_real_vector` and to define a new type `abstract_complex_vector`, along with writing the associated variants of the subroutines. While this would work, there still are some subroutines that should (ideally without having to write two versions of them) accept either `abstract_real_vector`s or `abstract_complex_vector`s. I’m not sure if I’m entirely clear, but I was wondering if there is some more “fortranic” way to handle this problem than duplicating everything for real- and complex-valued vectors?

What about the solution below (just illustrated for `scal`)?

```fortran
type, abstract, public :: abstract_vector
   private
contains
   procedure(abstract_scal), pass(self), deferred, public :: scal
end type abstract_vector

abstract interface
   subroutine abstract_scal(self, alpha)
      import abstract_vector
      class(abstract_vector), intent(inout) :: self
      class(*), intent(in) :: alpha
   end subroutine abstract_scal
end interface
```

And on the user side:

```fortran
type, extends(abstract_vector) :: vector
   real, allocatable :: a(:,:)
contains
   procedure, pass(self) :: scal
end type vector

contains

subroutine scal(self, alpha)
   class(vector), intent(inout) :: self
   class(*), intent(in) :: alpha
   select type (alpha)
   type is (real)
      self%a = alpha * self%a
   end select
end subroutine scal

end
```
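One thing to be aware of with this pattern: if `alpha` arrives with a type that no `type is` guard handles, the `select type` block falls through and the call silently does nothing. A self-contained sketch (names are mine, and I drop the parent type to keep it compilable on its own) with a `class default` guard that fails loudly instead:

```fortran
! Hypothetical illustration of the silent-failure pitfall of select type:
! an unhandled alpha type is simply ignored unless a class-default guard exists.
module vec_mod
   implicit none
   type :: vector
      real, allocatable :: a(:)
   contains
      procedure, pass(self) :: scal
   end type vector
contains
   subroutine scal(self, alpha)
      class(vector), intent(inout) :: self
      class(*), intent(in) :: alpha
      select type (alpha)
      type is (real)
         self%a = alpha * self%a
      class default
         ! Fail loudly instead of silently doing nothing.
         error stop "scal: unsupported type for alpha"
      end select
   end subroutine scal
end module vec_mod

program demo
   use vec_mod
   implicit none
   type(vector) :: v
   v%a = [1.0, 2.0]
   call v%scal(2.0)            ! dispatches on the "type is (real)" guard
   print *, v%a                ! 2.0 4.0
   ! call v%scal((0.0, 1.0))   ! a complex alpha would hit the class-default guard
end program demo
```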

Unlimited polymorphism certainly is an option, although I ain’t very familiar with it. How would it play along with the rest of the library? Consider for instance the subroutine computing the Arnoldi factorization of a linear operator `A`. A simplified version of this code is shown below.

```fortran
subroutine arnoldi_factorization(A, X, H)
   !! Compute the Arnoldi factorization A @ X = X @ H.
   class(abstract_linop), intent(in) :: A
   !! Linear operator to be factorized.
   class(abstract_vector), intent(inout) :: X(:)
   !! Orthogonal Krylov subspace.
   real(kind=wp), intent(inout) :: H(:, :)
   !! Upper Hessenberg matrix.

   ! Internal variables.
   integer :: i, k
   real(kind=wp) :: alpha

   do k = 1, size(X)-1
      ! Matrix-vector product X[k+1] = A @ X[k].
      call A%matvec(X(k), X(k+1))
      ! Update Hessenberg matrix and orthogonalize current vector.
      do i = 1, k
         alpha = X(i)%dot(X(k+1)) ; H(i, k) = alpha
         call X(k+1)%axpby(1.0_wp, X(i), -alpha)
      enddo
      ! Normalize current vector.
      alpha = X(k+1)%norm() ; H(k+1, k) = alpha
      call X(k+1)%scal(1.0_wp / alpha)
   end do

   return
end subroutine arnoldi_factorization
```

Here, `A` is an abstract linear operator with the `matvec` type-bound procedure satisfying the following interface

```fortran
abstract interface
   subroutine matvec(self, x, y)
      !! Compute the matrix-vector product y = A @ x.
      import abstract_vector, abstract_linop
      class(abstract_linop) , intent(in)  :: self
      class(abstract_vector), intent(in)  :: x
      class(abstract_vector), intent(out) :: y
   end subroutine matvec
end interface
```

Would taking the unlimited polymorphism route imply that:

- the Hessenberg matrix `H` needs to be declared as `class(*), intent(inout) :: H(:, :)`,
- the scalar `alpha` also needs to be declared as `class(*) :: alpha`,
- the code gets peppered with `select type`/`end select` blocks every time I need to access one of these variables?
- And if the answers to the first two questions are yes, does it also imply that, in order to use `arnoldi_factorization` in their code, users would have to declare `H` as an unlimited polymorphic variable as well?

The idea I originally had in mind was something along the following lines:

```fortran
type, abstract, public :: abstract_vector
   !! Base type from which all other vector types will be extended.
contains
end type abstract_vector

!----------------------------------------------------------------
!-----     ABSTRACT REAL VECTOR + TYPE-BOUND PROCEDURES     -----
!----------------------------------------------------------------

type, abstract, extends(abstract_vector), public :: abstract_real_vector
contains
   private
   procedure(abstract_real_scal) , pass(self), deferred, public :: scal
   procedure(abstract_real_axpby), pass(self), deferred, public :: axpby
   procedure(abstract_real_dot)  , pass(self), deferred, public :: dot
   procedure                     , pass(self), public           :: norm
end type abstract_real_vector

abstract interface
   subroutine abstract_real_scal(self, alpha)
      import abstract_real_vector, wp
      class(abstract_real_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
   end subroutine abstract_real_scal

   subroutine abstract_real_axpby(self, alpha, vec, beta)
      import abstract_real_vector, wp
      class(abstract_real_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
      class(abstract_real_vector), intent(in) :: vec
      real(kind=wp), intent(in) :: beta
   end subroutine abstract_real_axpby

   real(kind=wp) function abstract_real_dot(self, vec) result(alpha)
      import abstract_real_vector, wp
      class(abstract_real_vector), intent(in) :: self, vec
   end function abstract_real_dot
end interface

!-------------------------------------------------------------------
!-----     ABSTRACT COMPLEX VECTOR + TYPE-BOUND PROCEDURES     -----
!-------------------------------------------------------------------

type, abstract, extends(abstract_vector), public :: abstract_cmplx_vector
contains
   private
   procedure(abstract_cmplx_scal) , pass(self), deferred, public :: scal
   procedure(abstract_cmplx_axpby), pass(self), deferred, public :: axpby
   procedure(abstract_cmplx_dot)  , pass(self), deferred, public :: dot
   procedure                      , pass(self), public           :: norm
end type abstract_cmplx_vector

abstract interface
   subroutine abstract_cmplx_scal(self, alpha)
      import abstract_cmplx_vector, wp
      class(abstract_cmplx_vector), intent(inout) :: self
      complex(kind=wp), intent(in) :: alpha
   end subroutine abstract_cmplx_scal

   subroutine abstract_cmplx_axpby(self, alpha, vec, beta)
      import abstract_cmplx_vector, wp
      class(abstract_cmplx_vector), intent(inout) :: self
      complex(kind=wp), intent(in) :: alpha
      class(abstract_cmplx_vector), intent(in) :: vec
      complex(kind=wp), intent(in) :: beta
   end subroutine abstract_cmplx_axpby

   complex(kind=wp) function abstract_cmplx_dot(self, vec) result(alpha)
      import abstract_cmplx_vector, wp
      class(abstract_cmplx_vector), intent(in) :: self, vec
   end function abstract_cmplx_dot
end interface
```

I would also need some subroutines to do `cmplx_vector = cmplx_vector + real_vector`, although the opposite is not needed. Then have something like the following for Arnoldi:

```fortran
interface arnoldi_factorization
   module procedure real_arnoldi_factorization
   module procedure cmplx_arnoldi_factorization
end interface

subroutine real_arnoldi_factorization(A, X, H)
   !! Compute the Arnoldi factorization A @ X = X @ H.
   class(abstract_linop), intent(in) :: A
   !! Linear operator to be factorized.
   class(abstract_real_vector), intent(inout) :: X(:)
   !! Orthogonal Krylov subspace.
   real(kind=wp), intent(inout) :: H(:, :)
   !! Upper Hessenberg matrix.

   ! Internal variables.
   integer :: i, k
   real(kind=wp) :: alpha

   do k = 1, size(X)-1
      ! Matrix-vector product X[k+1] = A @ X[k].
      call A%matvec(X(k), X(k+1))
      ! Update Hessenberg matrix and orthogonalize current vector.
      do i = 1, k
         alpha = X(i)%dot(X(k+1)) ; H(i, k) = alpha
         call X(k+1)%axpby(1.0_wp, X(i), -alpha)
      enddo
      ! Normalize current vector.
      alpha = X(k+1)%norm() ; H(k+1, k) = alpha
      call X(k+1)%scal(1.0_wp / alpha)
   end do

   return
end subroutine real_arnoldi_factorization

subroutine cmplx_arnoldi_factorization(A, X, H)
   !! Compute the Arnoldi factorization A @ X = X @ H.
   class(abstract_linop), intent(in) :: A
   !! Linear operator to be factorized.
   class(abstract_cmplx_vector), intent(inout) :: X(:)
   !! Orthogonal Krylov subspace.
   complex(kind=wp), intent(inout) :: H(:, :)
   !! Upper Hessenberg matrix.

   ! Internal variables.
   integer :: i, k
   complex(kind=wp) :: alpha

   do k = 1, size(X)-1
      ! Matrix-vector product X[k+1] = A @ X[k].
      call A%matvec(X(k), X(k+1))
      ! Update Hessenberg matrix and orthogonalize current vector.
      do i = 1, k
         alpha = X(i)%dot(X(k+1)) ; H(i, k) = alpha
         call X(k+1)%axpby((1.0_wp, 0.0_wp), X(i), -alpha)
      enddo
      ! Normalize current vector.
      alpha = X(k+1)%norm() ; H(k+1, k) = alpha
      call X(k+1)%scal((1.0_wp, 0.0_wp) / alpha)
   end do

   return
end subroutine cmplx_arnoldi_factorization
```

What would be the benefit of unlimited polymorphism compared to these real/complex definitions of the different procedures? And to what extent would unlimited polymorphism impact the overall performance?

The reason I’m asking about these details is in relation to this discussion. I basically want to make this library as user-friendly as possible by eventually exposing a `scipy`-like API for user-defined notions of vectors and linear operators, while being as efficient as possible. And, cherry on the cake, I’d like the source code to be easily readable for someone with a basic knowledge of Krylov methods and Fortran.


I ain’t either, actually

Yes

Could be, but it would be cumbersome, as you would need a `select type` block not only for `H` but also for `alpha` (and any other local variable whose type varies). Alternatively you could declare two variables and use the appropriate one:

```fortran
real(wp) :: alpha_r
complex(wp) :: alpha_c
```
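To make the cost concrete, here is a hypothetical, self-contained sketch of the kind of `select type` block every assignment into a polymorphic `H` would then need:

```fortran
! Hypothetical sketch: every access to a class(*) Hessenberg matrix H
! must be wrapped in a select type block with intrinsic type guards.
program polymorphic_h
   implicit none
   integer, parameter :: wp = kind(1.0d0)
   class(*), allocatable :: H(:, :)
   real(kind=wp)    :: alpha_r
   complex(kind=wp) :: alpha_c

   allocate(real(kind=wp) :: H(2, 2))   ! typed allocation fixes the dynamic type
   alpha_r = 1.0_wp ; alpha_c = (0.0_wp, 1.0_wp)

   select type (H)
   type is (real(kind=wp))
      H(1, 1) = alpha_r
   type is (complex(kind=wp))
      H(1, 1) = alpha_c
   class default
      error stop "unsupported type for H"
   end select
end program polymorphic_h
```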

I’m afraid that the answer is “yes”…

No.

I think it’s code-dependent: if polymorphism results in duplicating 90% of the code in `select type` blocks, then there’s no real benefit IMO.

If the amount of computation inside a `select type` block is large enough, I guess that the overhead is quite limited.

Thanks a lot! After looking at what is being done in the linear algebra module of `stdlib`, I’ve decided to go full `fypp` to automatically generate the code for real/complex and single/double precision. It’ll take a bit of time to rewrite the whole thing, but I’m convinced it’ll be beneficial in the long run.
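For readers unfamiliar with this approach, a minimal sketch (names are mine, not `LightKrylov`’s actual templates) of a `fypp` template that expands a single interface body into real and complex variants:

```fortran
#! Hypothetical fypp template: one body, expanded for both numeric types.
#:set TYPES_SUFFIXES = [('real(kind=wp)', 'real'), ('complex(kind=wp)', 'cmplx')]
abstract interface
#:for type, suffix in TYPES_SUFFIXES
   subroutine abstract_${suffix}$_scal(self, alpha)
      import abstract_${suffix}$_vector, wp
      class(abstract_${suffix}$_vector), intent(inout) :: self
      ${type}$, intent(in) :: alpha
   end subroutine abstract_${suffix}$_scal
#:endfor
end interface
```

Running `fypp` on this file emits plain Fortran with both `abstract_real_scal` and `abstract_cmplx_scal` written out, so the maintained source stays single-copy while the compiled code is fully duplicated and statically typed.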


@loiseaujc you might want to take a look at the latest version of my library FSPARSE (GitHub - jalvesz/FSPARSE: A Modern Fortran sparse matrices gallery), which I recently upgraded using fypp based on stdlib. I took a subset of the functionality for templating the different matrix types and included a Python script for the preprocessing.

Btw, I have an open discussion in stdlib (python preprocessor · Issue #791 · fortran-lang/stdlib · GitHub) to enhance the use of fypp such that both the `fpm` build and the `CMake` build can have the same flexibility and a single entry point for the preprocessing. It is far from finished but already working as a draft; any input on this is very welcome.


That’s awesome! I have just started to play with fypp and, for the moment, build each file manually to get the gist of it. I’ll definitely look into what you’ve done. As for the discussion in stdlib, I’ve already bookmarked it.


I like this and @hkvzjal’s FSPARSE approach: pre-processing is the most productive option for your code right now. Unfortunately, the current Fortran standard does not allow any smart generic programming with numeric types, because the `class(*)` approach is mainly useful for vertical inheritance, not so much for horizontal polymorphism (plus, it is resolved at runtime, hence slow if repeated many times). Some of us, myself included, have been hoping to see number-centered generics features such as `real(*)` or `numeric(*)` (sounds Fortranic, doesn’t it?) considered for an upcoming language proposal, but that’s not going to happen anytime soon.
