Thanks for pointing out the last example, where the matrix exponential acts on each element of a vector of matrices. Fortran’s elemental functions can only be defined on scalars, so to get the same behavior you need to define a new matrix type, and then define an elemental matrix-exponential function that operates on this type. Then you can extend the generic interface for exp to include matrix exponentials.
Julia seems to work in a similar way, just with different syntax. Under the hood, Julia is using type information to dispatch to different methods for scalar and matrix exponential. It’s the same principle in Fortran, with the difference being that the Matrix type and associated methods come “for free” in Julia, rather than being defined by the user (or a 3rd party library). That’s a substantive difference, of course, especially considering that this is just one example of Julia’s nice support for “abstract linear algebra” at the language level.
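For instance, a bare-bones sketch of that principle might look like the following (the matrix_t type, the generic interface, and the truncated Taylor series are just illustrative, not a recommended algorithm):

module matrix_exp_m
  ! Hypothetical sketch: a container type plus an elemental function,
  ! exposed through a generic interface that extends the intrinsic name exp.
  implicit none
  type :: matrix_t
    real, allocatable :: a(:, :)
  end type
  interface exp
    module procedure exp_matrix
  end interface
contains
  elemental function exp_matrix(x) result(y)
    type(matrix_t), intent(in) :: x
    type(matrix_t) :: y
    real, allocatable :: term(:, :)
    integer :: i, k
    integer, parameter :: nterms = 8
    ! start from the identity (assumes a square matrix)
    allocate(y%a(size(x%a, 1), size(x%a, 2)), source=0.0)
    do i = 1, size(y%a, 1)
      y%a(i, i) = 1.0
    end do
    ! naive truncated Taylor series: term_k = X**k / k!
    term = y%a
    do k = 1, nterms
      term = matmul(term, x%a)/real(k)
      y%a = y%a + term
    end do
  end function
end module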
I understand that there is a wide range of methods for computing the matrix exponential (see What Is the Matrix Exponential? – Nick Higham). Until we know more, it seems that a single “exp” for matrices might create more trouble than it solves.
You would need something along the lines of this to achieve that. Perhaps there could be a nice library.
module my_library
use, intrinsic :: iso_fortran_env
contains
function my_exp_scalar(x) result(y)
real(real64), intent(in) :: x
real(real64) :: y
print *, 'function my_exp scalar'
y = exp(x)
end function
function my_exp_mat(x) result(y)
real(real64), intent(in) :: x(:, :)
real(real64) :: y(size(x, 1), size(x, 2))
real(real64) :: x_k(size(x, 1), size(x, 2))
integer :: i, j, k, fact
print *, 'function my_exp mat naive'
y = 0.
do concurrent(i=1:size(x, 1), j=1:size(x, 2))
if(i == j) y(i, j) = 1.
end do
fact = 1
x_k = x
do k = 1, 8
fact = fact*k
y = y + x_k/fact
x_k = matmul(x_k, x)
end do
end function
end module
module my_elemental
use, intrinsic :: iso_fortran_env
interface elementwise
module procedure elementwise_scalar
module procedure elementwise_mat
endinterface
type container
real(real64), allocatable :: mat(:, :)
endtype
contains
function elementwise_scalar(func, x) result(y)
interface
function func(x) result(y)
import
real(real64), intent(in) :: x
real(real64) :: y
end function
end interface
real(real64), intent(in) :: x(:)
real(real64) :: y(size(x))
integer :: i
do i = 1, size(x)
y(i) = func(x(i))
end do
end function
function elementwise_mat(func, x) result(y)
interface
function func(x) result(y)
import
real(real64), intent(in) :: x(:, :)
real(real64) :: y(size(x, 1), size(x, 2))
end function
end interface
type(container), intent(in) :: x(:)
type(container) :: y(size(x))
integer :: i
do i = 1, size(x)
y(i)%mat = func(x(i)%mat)
end do
end function
end module
module helpers
use, intrinsic :: iso_fortran_env
contains
function ones(dim1, dim2) result(mat)
integer, intent(in) :: dim1, dim2
real(real64), allocatable :: mat(:, :)
allocate(mat(dim1, dim2))
mat = 1.
end function
subroutine print_mat(mat)
real(real64), intent(in) :: mat(:, :)
integer(int32) :: i, j
write(*, '(i0,"x",i0," Matrix{Float64}")') size(mat, 1), size(mat, 2)
do i = 1, size(mat, 1)
write(*, '(22f15.7)', advance='no')(mat(i, j), j=1, size(mat, 2))
write(*, *)
end do
end subroutine
end module
program test
use, intrinsic :: iso_fortran_env
use my_library
use my_elemental
use helpers
real(real64) :: a
real(real64) :: A_mat(2, 2)
real(real64), allocatable :: v(:)
type(container), allocatable :: V_mat(:), results(:)
integer :: i
a = 1.0
A_mat = ones(2, 2)
print *, my_exp_scalar(a)
call print_mat(my_exp_mat(A_mat))
v = [a, a, a, a]
print *, elementwise(my_exp_scalar, v)
V_mat = [(container(A_mat), i=1, 4)]
results = elementwise(my_exp_mat, V_mat)
do i = 1, 4
call print_mat(results(i)%mat)
end do
end program
Would it be possible, in Fortran, to write a map function, such that
map(f, x)
computes f for every element of x, with f and x being of any type? Maybe working only with pointers internally? (I guess there is no way for it to know the size of an element of x.)
There definitely are limits to the generic programming capabilities, but one thing that is different about Fortran is that you don’t need to use map at all when you use elemental. This is summed up in this post: Map, Filter, Reduce in Fortran 2018 | Milan Curcic. Milan says, "If you write your functions as elemental, what you’d write in some other language as map(f, x) in Fortran 2018 is just f(x)". See also elemental in Fortran Wiki.
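As a trivial sketch of that point (the square function here is just for illustration): an elemental function applies automatically to arrays of any rank, so no explicit map is needed.

module elemental_demo
  implicit none
contains
  elemental function square(x) result(y)
    real, intent(in) :: x
    real :: y
    y = x*x
  end function
end module

program demo
  use elemental_demo
  implicit none
  real :: v(4) = [1., 2., 3., 4.]
  ! what would be map(square, v) elsewhere is simply square(v) here
  print *, square(v)
end program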
I rewrote the code from above, removed my elementwise function, and used elemental functions in the library. So now it doesn’t seem totally ridiculous, at least.
module containers
use, intrinsic :: iso_fortran_env
type mat
real(real64), allocatable :: mat(:, :)
end type
end module
module my_library
use, intrinsic :: iso_fortran_env
use containers
interface my_exp
module procedure my_exp_scalar
module procedure my_exp_mat
end interface
contains
elemental function my_exp_scalar(x) result(y)
real(real64), intent(in) :: x
real(real64) :: y
y = exp(x)
end function
elemental function my_exp_mat(x) result(y)
type(mat), intent(in) :: x
type(mat) :: y
real(real64) :: x_k(size(x%mat, 1), size(x%mat, 2))
integer :: i, j, k, fact
allocate (y%mat, mold=x%mat)
y%mat = 0.
do concurrent(i=1:size(x%mat, 1), j=1:size(x%mat, 2))
if (i == j) y%mat(i, j) = 1.
end do
fact = 1
x_k = x%mat
do k = 1, 8
fact = fact*k
y%mat = y%mat + x_k/fact
x_k = matmul(x_k, x%mat)
end do
end function
end module
module helpers
use, intrinsic :: iso_fortran_env
contains
function ones(dim1, dim2) result(mat)
integer, intent(in) :: dim1, dim2
real(real64), allocatable :: mat(:, :)
allocate (mat(dim1, dim2))
mat = 1.
end function
subroutine print_mat(mat)
real(real64), intent(in) :: mat(:, :)
integer(int32) :: i, j
write (*, '(i0,"x",i0," Matrix{Float64}")') size(mat, 1), size(mat, 2)
do i = 1, size(mat, 1)
write (*, '(22f15.7)', advance='no') (mat(i, j), j=1, size(mat, 2))
write (*, *)
end do
end subroutine
end module
program test
use, intrinsic :: iso_fortran_env
use my_library
use containers
use helpers
real(real64) :: a
real(real64) :: A_mat(2, 2)
type(mat) :: result
real(real64) :: v(4)
type(mat) :: V_mat(4), results(4)
integer :: i
a = 1.0
A_mat = ones(2, 2)
print *, my_exp(a)
result = my_exp(mat(A_mat))
call print_mat(result%mat)
v = [a, a, a, a]
print *, my_exp(v)
V_mat = [(mat(A_mat), i=1, 4)]
results = my_exp(V_mat)
do i = 1, 4
call print_mat(results(i)%mat)
end do
end program
@implicitall, yes: looking at your penultimate post, it was unclear why you had felt the need to include the element-wise functions, particularly considering the post by @nshaffer that included the library “recipe” to be followed.
The motivation was “You can do this with essentially any function in Julia, even if it is a function defined in a library”. So the constraint for that piece of code was a non-elemental function defined in a library. The final version is for a library for which you have access to the code, or whose functions are already elemental.
The Fortran language, with its ISO/IEC standard, multiple vendors targeting different hardware platforms, and a diverse user community with such a long legacy of code bases, is on a different evolutionary path than Julia.
With Fortran, a key question is whether the language is sufficiently flexible to develop the kind of solutions that help its practitioners achieve performant computing goals. In the case of linear algebra too, it is a case of so near, yet so far: one can achieve a lot right now, but the limitations of the language intrinsics as well as generics imply considerable code duplication and verbosity in library code, and perhaps the use of features such as INCLUDE files, which are not appealing to quite a few.
Hopefully Fortran 202Y and the Fortran stdlib will make it a much richer experience for Fortran practitioners.
Toward the stdlib effort, the modules being considered for linear algebra will hopefully include good options for matrix computations: ones that exercise language features such as parameterized derived types and defined I/O, together with the kind of useful ELEMENTAL methods such as exp, log, etc. that Julia has taken care to offer its users, and that also provide other performance options as applicable (vectorization, concurrent processing, parallelization, etc.):
module mat_m
! Say this module is part of some Linear Algebra library
type :: mat_t(n,m)
integer, len :: n, m
private
real, public :: mat(n,m)
real :: work(n,m)
contains
private
procedure, pass(dtv) :: write_mat_t
generic, public :: write(formatted) => write_mat_t
end type
generic :: exp => exp_mat_t !<-- overload the intrinsic
contains
elemental function ones(n,m) result(r)
integer, intent(in) :: n,m
type(mat_t(n=n,m=m)) :: r
r%mat = 1.0
end function
elemental function exp_mat_t(x) result(y)
type(mat_t(n=*,m=*)), intent(in) :: x
type(mat_t(n=x%n,m=x%m)) :: y
integer :: i, j, k, fact
integer, parameter :: maxiter = 8
y%mat = 0.
do concurrent ( i=1:size(x%mat, 1), j=1:size(x%mat, 2))
if (i == j) y%mat(i, j) = 1.
end do
fact = 1
y%work = x%mat
do k = 1, maxiter
fact = fact*k
y%mat = y%mat + y%work/fact
y%work = matmul(y%work, x%mat)
end do
end function
subroutine write_mat_t (dtv, lun, iotype, vlist, istat, imsg)
class(mat_t(n=*,m=*)), intent(in) :: dtv
integer, intent(in) :: lun
character (len=*), intent(in) :: iotype
integer, intent(in) :: vlist(:)
integer, intent(out) :: istat
character (len=*), intent(inout) :: imsg
! local variable
integer :: i, j
write (lun, '(i0,"x",i0," Matrix{Float64}")', iostat=istat, iomsg=imsg) dtv%n, dtv%m
if ( istat /= 0 ) return
write(lun,*) new_line("")
do i = 1, dtv%n
write(lun, '(22f15.7)', advance='no', iostat=istat, iomsg=imsg) (dtv%mat(i, j), j=1,dtv%m)
if ( istat /= 0 ) return
write(lun, *) new_line("")
end do
return
end subroutine
end module
program test
use mat_m
real :: a_scalar
real :: vector(4)
type(mat_t(n=2,m=2)) :: A
type(mat_t(n=2,m=2)) :: V(4)
a_scalar = 1.0
vector = [ a_scalar, a_scalar, a_scalar, a_scalar ]
print *, exp( vector )
A = ones( A%n, A%m )
print *, exp( A )
V = [ A, A, A, A ]
print *, exp( V )
end program
Note the main program above in Fortran looks rather similar to the Julia snippet once some effort toward a higher level of abstraction is expended in a library.
But note that I set up the above code to work with default REAL for the underlying representation. Extending it to the other floating-point kinds supported by a processor means explicit code duplication, once for each kind, which can involve added effort to make it portable across platforms. This is among the aspects where we can hope the Generics facility in Fortran 202Y will offer a good solution.
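As a sketch of what that per-kind duplication tends to look like today (module and procedure names here are illustrative, and the bodies are placeholders; the shared algorithm could also sit in an INCLUDE file, as mentioned earlier):

module exp_kinds_m
  ! Each supported kind gets its own copy of the procedure, collected
  ! under one generic name.
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  interface my_exp_mat
    module procedure my_exp_mat_sp
    module procedure my_exp_mat_dp
  end interface
contains
  function my_exp_mat_sp(x) result(y)
    real(real32), intent(in) :: x(:, :)
    real(real32) :: y(size(x, 1), size(x, 2))
    y = x  ! placeholder, stands in for the shared body (single precision)
  end function
  function my_exp_mat_dp(x) result(y)
    real(real64), intent(in) :: x(:, :)
    real(real64) :: y(size(x, 1), size(x, 2))
    y = x  ! placeholder, stands in for the shared body (double precision)
  end function
end module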
C:\temp>ifort /standard-semantics p.f90
Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.2.0 Build 20210228_000000
Copyright (C) 1985-2021 Intel Corporation. All rights reserved.
Microsoft (R) Incremental Linker Version 14.27.29112.0
Copyright (C) Microsoft Corporation. All rights reserved.
Indeed. Fortran has long had good options for the matrix exponential, including the work by Robert Sidje at the University of Queensland with the EXPOKIT library: https://www.maths.uq.edu.au/expokit/paper.pdf
And the more recent work by Nicholas Higham (NAG – Nick Higham) on matrix functions, including exponentials, that Julia also uses, and which may be available in the NAG Library (@themos may have better details), callable from Fortran but perhaps behind a paywall.
Thus there remain good options that someone packaging up a modern library, such as the Fortran stdlib, might be able to use on the backend.
Things are more easily said than done, which is why I like to attempt to write up some of these examples in code. Now that we’ve agreed that elemental functions are better, note that elemental implies pure, so any code you call from them must also be pure. If the code is already pure, then I think it’s okay, but it’s an area where some best practices are still being established.
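A tiny illustration of that constraint (hypothetical names): an elemental function can only reference procedures that are themselves pure (or elemental), plus the pure intrinsics.

module purity_demo
  implicit none
contains
  pure function helper(x) result(y)
    real, intent(in) :: x
    real :: y
    y = 2.0*x
  end function
  elemental function scaled_exp(x) result(y)
    real, intent(in) :: x
    real :: y
    ! fine: helper is pure and exp is a (pure) intrinsic
    y = exp(helper(x))
    ! calling a non-pure library routine here would be rejected by the compiler
  end function
end module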
And I hope in the not-too-distant future most Fortran procedures will be prefix-spec’d as SIMPLE (Fortran 202X). Many procedures in Fortran, especially those in numerical libraries, can be, or indeed are, SIMPLE, such as those for matrix exponentials and other linear algebra computations.
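For reference, a minimal sketch of what that could look like (Fortran 2023 syntax; compiler support for SIMPLE is still limited, and the function here is made up for illustration):

module simple_demo
  implicit none
contains
  ! SIMPLE implies PURE and additionally forbids referencing host/module data,
  ! so the procedure depends only on its dummy arguments
  simple function rel_err(approx, exact) result(err)
    real, intent(in) :: approx, exact
    real :: err
    err = abs(approx - exact)/abs(exact)
  end function
end module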
Well, unless I am really missing something important here, this is not a correct “rewrite”. Not with k being complex and Y being real. And looking into the main program, which was introduced a few posts later, k is truly a complex value: (100.0_dp, 1.0_dp).
If z = x + i_*y (z complex; x, y real), then exp(i_*z) = exp(i_*(x + i_*y)) = exp(i_*x)*exp(-y),
and only then can the exp(i_*x) part be computed in the real domain using Euler’s formula.
So if we compare Julia computing the true complex exponential with Fortran using this rewrite, we are cheating by n**2 exponentiations (and multiplications).
PS. The original formula, with k=(100,1) and a(j)=2*j*pi, does not make much sense, as M(i,j) will be damped to near-zero for most (i,j).
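For what it’s worth, here is a small sketch of that decomposition in code, using the (100.0_dp, 1.0_dp) value from the main program (variable names are made up):

program complex_exp_demo
  use, intrinsic :: iso_fortran_env, only: dp => real64
  implicit none
  complex(dp), parameter :: i_ = (0.0_dp, 1.0_dp)
  complex(dp) :: z, full, euler
  z = (100.0_dp, 1.0_dp)           ! z = x + i*y with x = 100, y = 1
  full  = exp(i_*z)                ! intrinsic complex exponential
  ! exp(i*z) = exp(i*x)*exp(-y): the real-domain Euler part times the damping factor
  euler = cmplx(cos(z%re), sin(z%re), dp)*exp(-z%im)
  print *, full
  print *, euler
end program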
The Julia community has put a preprint of a paper entitled “Bridging HPC Communities through the Julia Programming Language” on ArXiv and ResearchGate:
“To this day, Fortran continues strongly as a leading programming language for HPC owing to its legacy of investments and highly optimized implementations (Kedward et al. 2022).”
They didn’t talk much about reproducibility, but I think it’s very important. It’s one of those things people tend to dismiss as “just academic bureaucracy” until it comes back to bite you, like just assuming the denominator is not zero because nature wouldn’t allow that.
Reproducible binaries in LFortran is one thing I would like to work on someday.