So I’m currently working on creating a routine that does something quite similar to MPI_Scatter, in that a single task has some field that we want to send portions of to other processes within a communicator. Let’s call the source field global_fld, the resulting split field split_fld, and say I want to call it like:
call split_global_fld(global_fld, split_fld)
noting that global_fld will only have valid data on the root task, but the routine needs to be called by all participating tasks.
This brings me to my question: what is a good way to set up this interface? My first idea was to set it up as follows:
subroutine split_global_fld(global_fld, split_fld)
   implicit none
   real, allocatable, dimension(:,:), intent(in ) :: global_fld ! should only be allocated on root task
   real, allocatable, dimension(:,:), intent( out) :: split_fld ! will be allocated on all tasks
   ...
   ! perform necessary MPI communication to split array
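For what it’s worth, the communication I have in mind for that elided part is roughly the sketch below (not my actual code; I repeat the declarations so it stands alone). It assumes an even block split along the second dimension and plain MPI_Scatter on MPI_COMM_WORLD, and dims, nloc, nranks, dummy, and the main = 0 parameter are names made up just for the sketch. The main point is that global_fld is only ever referenced on the root task:

subroutine split_global_fld(global_fld, split_fld)
   use mpi
   implicit none
   real, allocatable, dimension(:,:), intent(in ) :: global_fld ! allocated on root task only
   real, allocatable, dimension(:,:), intent( out) :: split_fld ! allocated here on every task

   integer :: dims(2), nloc, myrank, nranks, ierr
   integer, parameter :: main = 0
   real :: dummy(1,1) ! placeholder send buffer for non-root tasks

   call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)

   ! only root knows the global extents, so broadcast them first
   if ( myrank == main ) dims = shape(global_fld)
   call MPI_Bcast(dims, 2, MPI_INTEGER, main, MPI_COMM_WORLD, ierr)

   nloc = dims(2) / nranks ! assumes the second extent divides evenly
   allocate(split_fld(dims(1), nloc))

   if ( myrank == main ) then
      ! global_fld is only referenced here, where it is allocated
      call MPI_Scatter(global_fld, dims(1)*nloc, MPI_REAL, &
                       split_fld,  dims(1)*nloc, MPI_REAL, &
                       main, MPI_COMM_WORLD, ierr)
   else
      ! the send buffer is ignored on non-root tasks, so pass a placeholder
      call MPI_Scatter(dummy,      dims(1)*nloc, MPI_REAL, &
                       split_fld,  dims(1)*nloc, MPI_REAL, &
                       main, MPI_COMM_WORLD, ierr)
   endif
end subroutine split_global_fld

If the split isn’t even, MPI_Scatterv would replace MPI_Scatter, but the interface question is the same either way.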
and then, in the calling routine or program, it would look something like this:
real, allocatable, dimension(:,:) :: global_fld
real, allocatable, dimension(:,:) :: split_fld
...
if ( myrank == main ) then
   call populate_global_fld(global_fld) ! this returns an allocated array
endif
...
! split data across leaders
call split_global_fld(global_fld, split_fld) ! noting that global_fld isn't allocated on non-root tasks
In some early tests this seems to work, but it feels odd, and my colleague is worried that the behaviour here is undefined and that what you actually get may be compiler-dependent. Is that correct? Even if it isn’t, is this bad practice?
If so, I was wondering how others might set this up. Another option would be to have something like this in the calling program:
if ( myrank == main ) then
   call populate_global_fld(global_fld)
else
   ! spoof field to satisfy interfaces
   allocate(global_fld(1,1))
   global_fld(:,:) = 0
end if
and getting rid of the allocatable attribute in the interface specification.
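Concretely, I think that would mean dropping allocatable on global_fld only (split_fld would presumably stay allocatable so the routine can still size and allocate it), so the interface would look something like:

subroutine split_global_fld(global_fld, split_fld)
   implicit none
   real, dimension(:,:), intent(in ) :: global_fld ! assumed-shape: every task must now pass an allocated array
   real, allocatable, dimension(:,:), intent( out) :: split_fld ! still allocated inside the routine
   ...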