Excessive main memory usage by stdlib_hashmaps routines

I am using the stdlib_hashmaps module in an application that needs a large hash table (ideally several hundred million entries), where each entry has a small fixed size: a few default integers for key and data combined. I would expect a total memory usage of several GB, which would pose no problem on my desktop with 64 GB of RAM, but my code is using much more RAM than that.
Is there a work-around, for example some way to induce the hashmap routines to free up space?
To clarify things I wrote a simple test program that creates a hashmap with n items, n a power of two, in which each key is a single default integer and there is no further data. Here is that program.

program main
    use stdlib_hashmaps, only : chaining_hashmap_type
    implicit none
    integer, parameter :: n = 2**28
    call new_map (n)
    stop
contains
    subroutine new_map (n)
        integer, intent (in) :: n
        ! Produce a hash map containing n items, report the number of entries
        type (chaining_hashmap_type) :: map
        integer :: i
        call map%init()
        do i = 0, n-1
            call map%map_entry ([i])
        end do
        write (*, *) 'entries:', map%entries()
        return
    end subroutine new_map
end program main

With the given value n=2^28 the peak RAM usage is 47.0 GiB (reported by top). It is the same for the chaining_hashmap_type and the open_hashmap_type. But if the hashmap for this instance had 2^29 slots and each slot used only the minimal 4 bytes needed for this application, the total RAM usage would be only 2 GiB.
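For reference, here is a back-of-the-envelope calculation (a small sketch of my reasoning, using the figures above) showing that the observed usage works out to roughly 188 bytes per entry against only 4 bytes of payload:

program overhead
    implicit none
    integer, parameter :: n_entries = 2**28
    real :: observed_gib, bytes_per_entry

    observed_gib = 47.0                               ! peak RSS reported by top
    bytes_per_entry = observed_gib * 2.0**30 / n_entries
    ! payload is one default integer (4 bytes) per entry
    write (*, '(a, f6.1)') 'observed bytes per entry: ', bytes_per_entry   ! ~188
    write (*, '(a, f6.1)') 'overhead factor:          ', bytes_per_entry / 4.0
end program overhead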
Please advise!
(I have noticed the earlier post “Stdlib Hashmap error” by Chuckyvt on this forum from December 2023. That one concerns the stack size, not the RAM usage. I work around the stack-size issue by using ulimit -s unlimited or a suitably large value for the limit.)


Doesn’t chaining_hashmap_type imply a hash table that uses linked lists to manage collisions? That means the initial array is whatever size the hashmap decides on, and each array element holds the head of a linked list that stores the actual entries. So 2^28 entries are likely to have a great many linked-list nodes allocated for them, each much larger than 4 bytes.
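For illustration only (this is a sketch of a hypothetical node layout, not stdlib’s actual internal type): each chained entry has to carry at least a next pointer, the stored hash, and a descriptor for an allocatable key buffer, so the per-node footprint is already tens of bytes before any allocator or alignment overhead is counted:

program node_size
    use iso_fortran_env, only: int8, int64
    implicit none

    ! Hypothetical chaining node, loosely modelled on what a chained
    ! hash map needs per entry; not the actual stdlib derived type.
    type :: node_t
        integer(int64)              :: hash
        integer(int8), allocatable  :: key(:)    ! key stored as a byte buffer
        type(node_t), pointer       :: next => null()
    end type node_t

    type(node_t) :: dummy

    ! storage_size reports the bits of the node itself (array descriptor and
    ! pointer included, but not the separately allocated key data).
    write (*, '(a, i0, a)') 'bytes per node (excluding key data): ', &
        storage_size(dummy) / 8, ' plus a separate heap allocation per key'
end program node_size

On top of that, each node and each key buffer is a separate heap allocation, and allocators typically round small blocks up, so a per-entry cost in the low hundreds of bytes seems plausible and would be consistent with the ~188 bytes per entry observed above.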