Hej al,

For the past few months, I have been working on LightKrylov, a lightweight Fortran library abstracting out many of the details of Krylov-based methods for large-scale linear algebra problems. At its core is the notion of an `abstract_vector`, which is agnostic both to the user's data structure defining what they mean by a `vector` and to any underlying parallelization. This abstract type basically looks like

```
type, abstract, public :: abstract_vector
contains
   procedure(abstract_scal) , pass(self), deferred, public :: scal
   procedure(abstract_axpby), pass(self), deferred, public :: axpby
   procedure(abstract_dot)  , pass(self), deferred, public :: dot
   procedure, pass(self), public :: norm
end type abstract_vector
```

with the abstract interfaces defined as

```
abstract interface
   subroutine abstract_scal(self, alpha)
      import abstract_vector, wp
      class(abstract_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
   end subroutine abstract_scal

   subroutine abstract_axpby(self, alpha, vec, beta)
      import abstract_vector, wp
      class(abstract_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
      class(abstract_vector), intent(in) :: vec
      real(kind=wp), intent(in) :: beta
   end subroutine abstract_axpby

   real(kind=wp) function abstract_dot(self, vec) result(alpha)
      import abstract_vector, wp
      class(abstract_vector), intent(in) :: self, vec
   end function abstract_dot
end interface
```

where `wp` is my working precision. The function `norm` is defined inside the associated module as

```
real(kind=wp) function norm(self) result(alpha)
   class(abstract_vector), intent(in) :: self
   alpha = sqrt(self%dot(self))
end function norm
```
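
To make the pattern concrete, here is a minimal sketch of a user-side extension. The names `my_vectors` and `dense_vector` are hypothetical, and I'm assuming `abstract_vector` and `wp` are exported from a `lightkrylov` module; a real implementation would wrap the solver's native data structure rather than a plain array:

```
module my_vectors
   use lightkrylov   ! assumed to export abstract_vector and wp
   implicit none

   type, extends(abstract_vector), public :: dense_vector
      real(kind=wp), allocatable :: data(:)
   contains
      procedure, pass(self), public :: scal  => dense_scal
      procedure, pass(self), public :: axpby => dense_axpby
      procedure, pass(self), public :: dot   => dense_dot
   end type dense_vector

contains

   subroutine dense_scal(self, alpha)
      class(dense_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha
      self%data = alpha * self%data
   end subroutine dense_scal

   subroutine dense_axpby(self, alpha, vec, beta)
      ! self <- alpha*self + beta*vec
      class(dense_vector), intent(inout) :: self
      real(kind=wp), intent(in) :: alpha, beta
      class(abstract_vector), intent(in) :: vec
      select type (vec)
      type is (dense_vector)
         self%data = alpha*self%data + beta*vec%data
      class default
         error stop "dense_axpby: type mismatch"
      end select
   end subroutine dense_axpby

   real(kind=wp) function dense_dot(self, vec) result(alpha)
      class(dense_vector), intent(in) :: self
      class(abstract_vector), intent(in) :: vec
      select type (vec)
      type is (dense_vector)
         alpha = dot_product(self%data, vec%data)
      class default
         error stop "dense_dot: type mismatch"
      end select
   end function dense_dot

end module my_vectors
```

The `select type` guards are needed because the deferred interfaces only see `class(abstract_vector)`; `norm` comes for free from the parent type.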

Based on the definition of `abstract_axpby` and `abstract_dot`, it is implicitly assumed that this `abstract_vector` is real-valued. This works fine, and I've been able to link `LightKrylov` with the MPI-based CFD solver Nek5000 to compute the leading eigenpairs and singular triplets of my linear operators of interest (themselves extended from an `abstract_linop` type, with billion-dimensional vectors on a few hundred processors). For the eigenvectors, stored as an array of types derived from `abstract_vector`, I follow the LAPACK convention of keeping the real and imaginary parts of a complex-conjugate pair in the `i`-th and `i+1`-th elements of the array. So far so good.
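
Spelled out, this is the `dgeev`-style storage: if $\lambda_i = \alpha + i\beta$ with $\beta > 0$, the pair of stored real vectors $u_i, u_{i+1}$ encodes the two eigenvectors

$$
v_i = u_i + i\,u_{i+1}, \qquad v_{i+1} = \overline{v_i} = u_i - i\,u_{i+1}.
$$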

The problem I’m facing now is extending the whole thing to complex arithmetic without too much duplicated code. The easiest option for me would be to keep `LightKrylov` as it is and force the user to rewrite their linear algebra problem in $\mathbb{C}^n$ as an equivalent problem in $\mathbb{R}^{2n}$. This is the approach I’m currently using, but I ain’t too happy with it as it puts a lot of burden on the user. In particular, whenever I want to make the left and right eigenvectors bi-orthogonal, I need a fair bit of bookkeeping.
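
For reference, the embedding I have in mind is the standard one: splitting $A = A_r + iA_i$, $z = x + iy$ and $b = b_r + ib_i$, a complex system $Az = b$ becomes

$$
\begin{bmatrix} A_r & -A_i \\ A_i & A_r \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} b_r \\ b_i \end{bmatrix} .
$$

Note that for eigenproblems the real form carries every eigenvalue $\lambda$ of $A$ together with its conjugate $\bar{\lambda}$, so the computed eigenpairs have to be deduplicated by hand.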

Another approach would be to replace `abstract_vector` everywhere in the code base with `abstract_real_vector` and to define a new type `abstract_complex_vector`, along with the associated variants of the subroutines. While this would be fine, there still are some subroutines that should work irrespective of whether `abstract_real_vector`s or `abstract_complex_vector`s are being passed (ideally without having to write two versions of them). I’m not sure if I’m entirely clear, but I was wondering if there is some more “fortranic” way to handle this problem than duplicating everything for real- and complex-valued vectors?
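
For what it's worth, one pattern I've been toying with (purely a sketch, all names hypothetical) is a bare common parent carrying no arithmetic, so that type-agnostic utilities can be written once against it, while the real and complex arithmetic live in two abstract children with differently-typed `dot` results:

```
module vector_hierarchy
   implicit none
   integer, parameter :: wp = selected_real_kind(15)   ! stand-in for my working precision

   ! Bare parent: no arithmetic, shared by both children.
   type, abstract, public :: base_vector
   end type base_vector

   type, abstract, extends(base_vector), public :: real_vector
   contains
      procedure(real_dot_if), pass(self), deferred, public :: dot
   end type real_vector

   type, abstract, extends(base_vector), public :: complex_vector
   contains
      procedure(cmplx_dot_if), pass(self), deferred, public :: dot
   end type complex_vector

   abstract interface
      real(kind=wp) function real_dot_if(self, vec) result(alpha)
         import real_vector, wp
         class(real_vector), intent(in) :: self, vec
      end function real_dot_if

      complex(kind=wp) function cmplx_dot_if(self, vec) result(alpha)
         import complex_vector, wp
         class(complex_vector), intent(in) :: self, vec
      end function cmplx_dot_if
   end interface
end module vector_hierarchy
```

A routine that only shuffles vectors around (reordering a Krylov basis, say) could then take `class(base_vector)` arguments and be written once. The catch is that anything touching `dot` or `axpby` still has to dispatch on the child type, so this only de-duplicates the purely structural routines.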