I would say that assumed shape dummy arguments are both simpler and more flexible than explicit shape array dummy arguments. Where the functionality does overlap, say for contiguous actual arguments, the assumed shape dummy arguments are also less error prone. With modern Fortran, the only situations where explicit shape arrays have an advantage are 1) when the programmer wants the dummy argument to have different dimensions and/or rank than the actual argument, or 2) when the programmer knows that a contiguous actual argument is always associated with a contiguous dummy argument (i.e. the f77 subset).
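As a minimal sketch of the contrast (the module and routine names here are just made up for illustration), note that the assumed shape version requires an explicit interface, which is exactly where the extra compile-time checking of rank and type comes from:

```fortran
module sum_mod
   implicit none
contains
   ! Explicit shape (f77 style): the caller passes the length, and the
   ! dummy argument is assumed to be contiguous.
   real function sum_explicit(n, x)
      integer, intent(in) :: n
      real, intent(in)    :: x(n)
      sum_explicit = sum(x)
   end function sum_explicit

   ! Assumed shape: the compiler passes shape and stride metadata in a
   ! descriptor, so array sections can be passed directly.
   real function sum_assumed(x)
      real, intent(in) :: x(:)
      sum_assumed = sum(x)
   end function sum_assumed
end module sum_mod
```

With `sum_assumed`, a call like `sum_assumed(a(1:n:2))` just works; with `sum_explicit`, the caller has to get `n` right and may pay for a hidden temporary if the actual argument is not contiguous.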
Is this a situation where LFortran cannot compile standard-conforming code without specifying an option? I always prefer the other approach, where standard-conforming code compiles without options, and options are used to invoke nonstandard behavior.
I think recursion works just as easily, or often even more easily, with modern Fortran constructs such as assumed shape arrays. Also note that in the case of LAPACK and the BLAS, explicit strides are only provided for the vector arguments (via the INCX and INCY types of arguments); all of the matrix arguments in the level-2 and level-3 operations require that the leading dimension is contiguous. I think the original LAPACK code made that choice just for simplicity; it would have been possible to add INCX-type arguments for all of the matrix arguments as well, but the user interface would have been even more complicated. If you can imagine a LAPACK library that fully uses assumed shape arrays, then you would get that functionality for all vector and matrix arguments with no programmer complications at all; the compiler would do all of the heavy lifting.
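As a hedged sketch of what I mean (the routine name is hypothetical, not a proposed LAPACK interface), an assumed shape dot product gives the INCX/INCY behavior of the level-1 routines for free, because the stride travels in the descriptor:

```fortran
module dot_mod
   implicit none
contains
   ! Hypothetical assumed-shape analogue of a level-1 dot product.
   real function my_dot(x, y)
      real, intent(in) :: x(:), y(:)
      my_dot = dot_product(x, y)
   end function my_dot
end module dot_mod

program demo
   use dot_mod
   implicit none
   real :: a(10), b(10)
   call random_number(a)
   call random_number(b)
   ! Roughly the job of SDOT(5, a, 1, b, 2) in the f77 interface:
   print *, my_dot(a(1:5), b(1:9:2))
end program demo
```

The same idea would extend to the matrix arguments: strided sections could be passed to a hypothetical assumed-shape GEMM with no LDA or INCX bookkeeping at all.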
This has been the problem since f90 introduced array syntax. When an assumed shape actual argument is associated with an explicit shape dummy argument, copy-in/copy-out overhead can occur, and it does so without the programmer having much control over it. The end user sees poor performance, but it is sometimes difficult to see what the problem is and how to fix it. As you say, sometimes this overhead is hidden by all of the computational effort, say in the level-3 BLAS operations with N**2 data and N**3 floating point effort, but other times it cannot be, e.g. even in a level-2 operation where both the copy overhead and the floating point operation count scale as N**2.
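Here is a small sketch of where the hidden copy typically shows up (the routine name is made up); compilers that support it will report the temporary at run time with options like gfortran's `-fcheck=array-temps` or Intel's `-check arg_temp_created`:

```fortran
program copy_demo
   implicit none
   real :: a(100,100)
   call random_number(a)
   ! a(1,:) is a non-contiguous row.  Because the dummy below has explicit
   ! shape, most compilers copy the row into a contiguous temporary before
   ! the call and copy it back afterwards (copy-in/copy-out).
   call scale_row(size(a,2), a(1,:))
   print *, a(1,1)
contains
   subroutine scale_row(n, x)   ! f77-style explicit-shape dummy
      integer, intent(in) :: n
      real, intent(inout) :: x(n)
      x = 2.0*x
   end subroutine scale_row
end program copy_demo
```

Declaring the dummy as `x(:)` instead would pass the descriptor and avoid the copy on most compilers.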
Isn't what you are talking about here the same as using assumed shape arrays? Those dummy arguments have metadata associated with them that define the rank, bounds, and memory strides.
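A toy sketch of that metadata in action (nothing here is specific to any library; it just shows what the descriptor carries into the procedure):

```fortran
program metadata_demo
   implicit none
   real :: a(8,8)
   call random_number(a)
   call describe(a(2:7:2, 3:5))   ! a strided, non-contiguous section
contains
   subroutine describe(x)
      real, intent(in) :: x(:,:)
      print *, 'rank       =', rank(x)           ! 2
      print *, 'shape      =', shape(x)          ! 3 3
      print *, 'contiguous =', is_contiguous(x)  ! F
   end subroutine describe
end program metadata_demo
```

The strides themselves are not exposed through an intrinsic, but the compiler uses them whenever the dummy argument is indexed, which is exactly the bookkeeping that INCX and LDA push onto the programmer in the f77-style interfaces.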
This is a universal programming problem: finding the right compromise among simplicity, flexibility, and efficiency.
The choice that Fortran has made is that if you use assumed shape arrays, the compiler does the heavy lifting, including all of the array rank matching and stride metadata; if you use explicit shape arrays, the programmer assumes the responsibility for keeping the array ranks, bounds, sizes, strides, and so on consistent.
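To make that division of labor concrete, here is a sketch of the same column-sum routine in both styles (the routine names are invented). In the explicit shape version the caller must keep m, n, and lda mutually consistent; in the assumed shape version the descriptor carries all of that:

```fortran
module colsum_mod
   implicit none
contains
   ! Explicit shape, LAPACK style: the programmer supplies m, n, and lda
   ! and is responsible for their consistency; nothing stops a wrong lda
   ! from compiling.
   subroutine col_sums_f77(m, n, a, lda, s)
      integer, intent(in)  :: m, n, lda
      real,    intent(in)  :: a(lda,n)
      real,    intent(out) :: s(n)
      integer :: j
      do j = 1, n
         s(j) = sum(a(1:m,j))
      end do
   end subroutine col_sums_f77

   ! Assumed shape: rank, bounds, and strides travel in the descriptor.
   subroutine col_sums_modern(a, s)
      real, intent(in)  :: a(:,:)
      real, intent(out) :: s(:)
      s = sum(a, dim=1)
   end subroutine col_sums_modern
end module colsum_mod
```

The caller of `col_sums_modern` can pass any rank-2 section, contiguous or not, and the compiler handles the rest; the caller of `col_sums_f77` has to get lda and the sizes right, and pays a hidden copy whenever the actual argument is not contiguous.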