Yes, I agree.
The Fortran language, with its ISO/IEC standard, multiple vendors targeting different hardware platforms, and a diverse user community carrying a long legacy of code bases, is on a different evolutionary path than Julia.
With Fortran, a key question is whether the language is sufficiently flexible to develop the kinds of solutions that help its practitioners achieve their performant-computing goals. Linear algebra, too, is a case of so near, yet so far: one can achieve a lot right now, but the limitations with language intrinsics as well as with generics imply considerable code duplication and verbosity in library code, and perhaps the use of features such as INCLUDE files, which do not appeal to quite a few practitioners.
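To make that INCLUDE-file workaround concrete, here is a minimal sketch of my own (the file name sumsq_body.inc, the module name sumsq_m, and the procedure names are purely illustrative, not from any actual library): the declarations and body are written once in terms of a working-precision parameter wp, and each kind-specific procedure sets wp and then INCLUDEs the shared body.

! File "sumsq_body.inc" (hypothetical name): shared declarations and body,
! written once in terms of wp, which the including procedure defines first.
real(wp), intent(in) :: x(:)
real(wp) :: r
r = sum( x*x )

! File "sumsq_m.f90" (hypothetical name): one specific procedure per supported
! kind, each differing only in the value given to wp.
module sumsq_m
   use, intrinsic :: iso_fortran_env, only : sp => real32, dp => real64
   implicit none
   interface sumsq
      module procedure sumsq_sp, sumsq_dp
   end interface
contains
   pure function sumsq_sp(x) result(r)
      integer, parameter :: wp = sp
      include "sumsq_body.inc"
   end function
   pure function sumsq_dp(x) result(r)
      integer, parameter :: wp = dp
      include "sumsq_body.inc"
   end function
end module

The same pattern is often automated with a preprocessor instead, which only underscores the point about missing generics.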
Hopefully Fortran 202Y and the Fortran stdlib will make it a much richer experience for Fortran practitioners.

Toward the stdlib effort, the modules being considered for linear algebra will hopefully include good options for matrix computations: options that exercise language features such as parameterized derived types and defined I/O in conjunction with the kind of useful ELEMENTAL procedures (exp, log, etc.) that Julia has taken care to offer its users, and that also provide other performance options as applicable, such as vectorization, concurrent processing, and parallelization. For example:
module mat_m
! Say this module is part of some linear algebra library
   implicit none

   ! Parameterized derived type: n and m are length type parameters
   type :: mat_t(n,m)
      integer, len :: n, m
      private
      real, public :: mat(n,m)   ! the matrix data
      real :: work(n,m)          ! scratch storage used by exp_mat_t
   contains
      private
      procedure, pass(dtv) :: write_mat_t
      generic, public :: write(formatted) => write_mat_t   ! defined formatted output
   end type

   generic :: exp => exp_mat_t !<-- overload the intrinsic exp for mat_t

contains

   elemental function ones(n,m) result(r)
   ! n-by-m matrix with all elements set to one
      integer, intent(in) :: n, m
      type(mat_t(n=n,m=m)) :: r
      r%mat = 1.0
   end function

   elemental function exp_mat_t(x) result(y)
   ! Matrix exponential via a truncated Taylor series: I + X + X**2/2! + X**3/3! + ...
      type(mat_t(n=*,m=*)), intent(in) :: x
      type(mat_t(n=x%n,m=x%m)) :: y
      integer :: i, j, k, fact
      integer, parameter :: maxiter = 8
      ! start from the identity matrix
      y%mat = 0.
      do concurrent ( i=1:size(x%mat, 1), j=1:size(x%mat, 2) )
         if (i == j) y%mat(i, j) = 1.
      end do
      fact = 1
      y%work = x%mat   ! y%work holds X**k through the loop
      do k = 1, maxiter
         fact = fact*k
         y%mat = y%mat + y%work/fact
         y%work = matmul(y%work, x%mat)
      end do
   end function

   subroutine write_mat_t(dtv, lun, iotype, vlist, istat, imsg)
   ! Defined formatted output styled after Julia's matrix display
      class(mat_t(n=*,m=*)), intent(in) :: dtv
      integer, intent(in) :: lun
      character(len=*), intent(in) :: iotype
      integer, intent(in) :: vlist(:)
      integer, intent(out) :: istat
      character(len=*), intent(inout) :: imsg
      ! local variables
      integer :: i, j
      write (lun, '(i0,"x",i0," Matrix{Float64}")', iostat=istat, iomsg=imsg) dtv%n, dtv%m
      if ( istat /= 0 ) return
      write (lun, *) new_line("")   ! emit a newline character to break the line within the parent record
      do i = 1, dtv%n
         write (lun, '(22f15.7)', advance='no', iostat=istat, iomsg=imsg) (dtv%mat(i, j), j=1,dtv%m)
         if ( istat /= 0 ) return
         write (lun, *) new_line("")
      end do
      return
   end subroutine

end module
program test
   use mat_m
   implicit none
   real :: a_scalar
   real :: vector(4)
   type(mat_t(n=2,m=2)) :: A
   type(mat_t(n=2,m=2)) :: V(4)
   a_scalar = 1.0
   vector = [ a_scalar, a_scalar, a_scalar, a_scalar ]
   print *, exp( vector )   ! intrinsic exp applied elementally to a real array
   A = ones( A%n, A%m )
   print *, exp( A )        ! exp_mat_t invoked on a scalar mat_t
   V = [ A, A, A, A ]
   print *, exp( V )        ! exp_mat_t applied elementally to an array of mat_t
end program
Note the main program above in Fortran looks rather similar to the Julia snippet once some effort toward a higher level of abstraction has been expended in a library.

But note I set up the above code to work with default REAL for the underlying representation. Extending it to the other floating-point kinds supported by a processor requires explicit code duplication, once for each kind, which can involve added effort to make the code portable across platforms (a rough sketch of that duplication pattern follows after the compiler output below). This is among the aspects for which we can hope the Generics facility in Fortran 202Y will offer a good solution.
C:\temp>ifort /standard-semantics p.f90
Intel(R) Fortran Intel(R) 64 Compiler Classic for applications running on Intel(R) 64, Version 2021.2.0 Build 20210228_000000
Copyright (C) 1985-2021 Intel Corporation. All rights reserved.

Microsoft (R) Incremental Linker Version 14.27.29112.0
Copyright (C) Microsoft Corporation. All rights reserved.

-out:p.exe
-subsystem:console
p.obj

C:\temp>p.exe
 2.718282 2.718282 2.718282 2.718282
 2x2 Matrix{Float64}
 4.1936512 3.1936510
 3.1936510 4.1936512
 2x2 Matrix{Float64}
 4.1936512 3.1936510
 3.1936510 4.1936512
 2x2 Matrix{Float64}
 4.1936512 3.1936510
 3.1936510 4.1936512
 2x2 Matrix{Float64}
 4.1936512 3.1936510
 3.1936510 4.1936512
 2x2 Matrix{Float64}
 4.1936512 3.1936510
 3.1936510 4.1936512

C:\temp>
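To make the duplication concrete, here is a rough sketch of my own (not part of the code above; the names mat_kinds_m and exp_mat are illustrative) of what adding a KIND type parameter to mat_t looks like today: the data representation generalizes in one place, but because a kind type parameter must be a compile-time constant in every declaration, each specific procedure still has to be written once per supported kind.

module mat_kinds_m
! Rough sketch only: the type picks up a kind parameter, yet the procedures
! still need one specific per kind, hence the duplication mentioned above.
   use, intrinsic :: iso_fortran_env, only : sp => real32, dp => real64
   implicit none
   type :: mat_t(k, n, m)
      integer, kind :: k = kind(1.0)   ! kind type parameter (compile-time constant)
      integer, len  :: n, m
      real(kind=k) :: mat(n,m)
   end type
   interface exp_mat
      module procedure exp_mat_sp, exp_mat_dp   ! ...and so on for each kind
   end interface
contains
   elemental function exp_mat_sp(x) result(y)
      type(mat_t(k=sp, n=*, m=*)), intent(in) :: x
      type(mat_t(k=sp, n=x%n, m=x%m)) :: y
      ! the Taylor-series body of exp_mat_t would be repeated here verbatim;
      ! a placeholder assignment keeps the sketch self-contained
      y%mat = 0.0_sp
   end function
   elemental function exp_mat_dp(x) result(y)
      type(mat_t(k=dp, n=*, m=*)), intent(in) :: x
      type(mat_t(k=dp, n=x%n, m=x%m)) :: y
      ! ...and repeated again, differing only in the kind
      y%mat = 0.0_dp
   end function
end module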