Is there a difference under the hood between the two cases below? Do they both use dynamic memory allocation? If not, how is memory allocation handled in the first case?
Declare an array b reusing the size of an assumed-shape array a, as in:
subroutine arrays(a)
   real, dimension(:), intent(in) :: a
   real, dimension(size(a)) :: b
   !...
end subroutine arrays
In the second case, make b allocatable and allocate it using the size of a, as in:
subroutine arrays(a)
   real, dimension(:), intent(in) :: a
   real, dimension(:), allocatable :: b
   allocate(b(size(a)))
   !...
end subroutine arrays
Assume b never needs to be reallocated.
Does the type of the arrays make any difference, for example if b is of character type, as in the sketch below?
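For instance, a sketch of what I mean (chars, t, and u are just illustrative names):

subroutine chars(s)
   character(len=*), intent(in) :: s
   character(len=len(s)) :: t            ! automatic: length comes from a specification expression
   character(len=:), allocatable :: u    ! deferred-length alternative
   allocate(character(len=len(s)) :: u)  ! typed allocation sets the length at run time
   !...
end subroutine chars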
There was a previous thread on this topic as well. One difference between automatic and allocatable arrays is that the allocate statement lets you set and check a stat specifier to see whether the allocation succeeded; there is no such error handling with automatic arrays.
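For example, a minimal sketch of that check, using the same interface as above (istat and emsg are illustrative names):

subroutine arrays(a)
   real, dimension(:), intent(in) :: a
   real, dimension(:), allocatable :: b
   integer :: istat
   character(len=200) :: emsg
   allocate(b(size(a)), stat=istat, errmsg=emsg)  ! istat is nonzero on failure
   if (istat /= 0) then
      print *, 'allocation of b failed: ', trim(emsg)
      return                                      ! recover instead of crashing
   end if
   !...
end subroutine arrays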
In practice, yes. The first is an “automatic array”, and most implementations (by default) “allocate” space for b on the stack. The second version is explicitly allocatable and explicitly allocated, so the memory is definitely dynamically allocated (i.e., on the heap). That said, the standard doesn’t really say how local variables (including automatic arrays) are allocated, and in fact some compilers have options to put automatic arrays on the heap.
I previously understood automatic arrays to have dimensions defined by explicit integer dummy argument(s). I suppose that with assumed-shape array arguments the dimensions are passed silently and are therefore usable for the same purpose.
If you want to go down the rabbit hole in the standard, here are a few excerpts.
automatic data object
nondummy data object (3.42) with a type parameter (3.144.12) or array bound (3.17) that depends on the value of a specification expression (3.128) that is not a constant expression (3.36)
A specification expression is an expression with limitations that make it suitable for use in specifications such as length type parameters (C704) and array bounds (R816, R817). A specification-expr shall be a constant expression unless it is in an interface body (15.4.3.2), the specification part of a subprogram or BLOCK construct, a derived type definition, or the declaration-type-spec of a FUNCTION statement (15.6.2.2).
R1029 specification-expr is scalar-int-expr
C1011 (R1029) The scalar-int-expr shall be a restricted expression.
A restricted expression is an expression in which each operation is intrinsic or defined by a specification function and each primary is
…
(9) a specification inquiry where each designator or argument is
(a) a restricted expression or
(b) a variable that is not an optional dummy argument, and whose properties inquired about are not
(i) dependent on the upper bound of the last dimension of an assumed-size array,
(ii) deferred, or
(iii) defined by an expression that is not a restricted expression,
16.9.194 SIZE (ARRAY [, DIM, KIND])
Description. Size of an array or one extent.
Class. Inquiry function.
Basically, there are some restrictions on what the bounds can be, but it’s more than just an integer dummy argument, and the size of an assumed-shape dummy argument fits the bill.
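For illustration, a sketch with a few bounds that all qualify as specification expressions (the names are illustrative):

subroutine bounds(a, n)
   real, dimension(:), intent(in) :: a
   integer, intent(in) :: n
   real :: b(size(a))           ! inquiry on an assumed-shape dummy
   real :: c(2*n + 1)           ! intrinsic operations on a dummy argument
   real :: d(min(n, size(a)))   ! intrinsic functions are allowed too
   !...
end subroutine bounds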
Now that subroutines have the recursive attribute by default, all local arrays and other variables without the save attribute must be allocated dynamically. I think the compiler is free to allocate these local entities on the stack or from the heap. Stack allocation is much more efficient, but the stack size is often limited, so runtime errors can occur. Heap allocation requires more effort, especially if garbage collection is triggered. So there are some practical consequences, but the programmer does not have full control. The rule of thumb is that scalars and automatic arrays are allocated from the stack and allocatable entities are allocated from the heap, but a compiler can, for example, place allocatable entities on the stack as a low-level optimization.

As discussed in the previous thread on this topic, the default behavior for a standard-conforming Fortran program should be that it “just works”: no unnecessary runtime errors due to limited stack size, and so on. This would require compilers either to allocate all local arrays on the heap, or to use a trial-and-error approach where stack allocation is attempted first and heap allocation is used if necessary. If that were actually the common default compiler behavior, then programmers would demand more control, so that, for example, some entities could be placed on the stack for efficiency and large arrays could be allocated directly on the heap to avoid the runtime overhead of the failed stack attempt. None of this can be done in a standard way, because the Fortran standard does not even recognize the concepts of stack and heap allocation.
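To illustrate the first point, a sketch of why recursion forces per-invocation allocation: each call must get its own copy of the automatic array, so it cannot live at a fixed static address (level and work are illustrative names):

recursive subroutine level(n)
   integer, intent(in) :: n
   real :: work(n)              ! automatic: a fresh copy for each invocation
   work = real(n)
   if (n > 1) call level(n - 1)
   print *, n, work(1)          ! this invocation's copy is intact after the recursive call
end subroutine level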
On the classic Cray systems, the stack(s) actually resided in heap blocks. Generally, multiple stack frames could fit in a single heap block. However, if upon invocation a procedure required more stack space than was available in the block, an additional heap block was allocated and linked back to the previous one. There was an API to tune things as well.
Some modern compilers have an option to heap-allocate automatic arrays when they are larger than some specified size (e.g., Intel's -heap-arrays option).
I just want people to be super clear on what “no error handling” means in practice.
It means that your compiler may produce code that will crash your program, if you are lucky, or give you wrong results and start WW3 after killing you, if you are not.
My choice would depend on what harm a failure of the program containing the (possibly) automatic array could cause.
If you like playing with the odds, you can also code a local array of constant size and switch between using that or an allocatable, based on a cheap scalar integer comparison of the runtime size with the constant size. Beware of heavily recursive subprograms, though, where this is a less effective mitigation.
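A sketch of that switch (work, process, nfix, bfix, and balloc are illustrative names, and the threshold is a tuning choice):

subroutine work(a)
   real, dimension(:), intent(in) :: a
   integer, parameter :: nfix = 1024       ! compile-time buffer size; tune to taste
   real :: bfix(nfix)                      ! fixed-size buffer, typically on the stack
   real, dimension(:), allocatable :: balloc
   integer :: n
   n = size(a)
   if (n <= nfix) then
      call process(bfix(1:n))              ! cheap path: no allocation
   else
      allocate(balloc(n))                  ! fallback: heap allocation for large sizes
      call process(balloc)
   end if
   ! caveat: under deep recursion every invocation still reserves nfix reals,
   ! whether or not the fixed buffer is used, so the mitigation is weaker there
contains
   subroutine process(b)
      real, dimension(:), intent(inout) :: b
      !...
   end subroutine process
end subroutine work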