Allocatable vs adjustable

I was trying to define what size the stack should be on a 64-bit OS.

To remove all these stack problems, I think that if you use -march=native, the stack size should default to the installed memory size, but not more than 2^(48-1) / omp_get_max_threads().

After all, it is only an address range: unused virtual memory is never even allocated a virtual memory page.

It is a shame 64-bit compiler developers do not think this way and do away with most stack overflow errors. There would definitely be some who would still try to exceed this limit!

AFAIK the stack size is “decided” by the OS at the moment it loads an executable, it does not depend on the compilation. At least on *nix systems. And the stack space of a process is reserved and exclusive to this process. So you can’t set by default the stack size to the total available memory: you could run only one process like this. What is a mystery to me, however, is how an “unlimited” stack size is managed…

A resource is “unlimited” not from math (or philosophy) but from the point of view of the computer, so its value is most likely an “unsigned huge” (e.g., 0xffffffffffffffff).

The POSIX symbolic constant is RLIM_INFINITY, defined in <sys/resource.h>.

Which probably means the stack will keep growing until either process completion or crash.

This is partly true.
The stack size is “decided” by the linker, but unfortunately is limited by the OS. The linker may be provided by the compiler or by the OS. Present limits in 64-bit OSes are simply not necessary.

The stack “size” sets out a (virtual) address range that is reserved for stack variables. Unfortunately this is far too small in Windows and probably other OSes.
In 64-bit, my googling suggests the virtual address range can be 2^48 bytes (256 terabytes), far greater than the physical memory limits of, say, one terabyte (2^40 bytes).

The important points are:

  1. the 2^48 virtual address space is much larger than can be used;
  2. virtual memory pages are only allocated to actual arrays in the program;
  3. physical memory pages are only allocated to actual arrays that are initialised/used; and
  4. only the virtual memory page table must be able to support the address range.

“If” you have a 64 GByte stack size, you still only consume the memory that is actually allocated or used by the program, and the stack is much less likely to clash with the heap.
If your arrays in the stack(s) and heap are spread over a terabyte address range, this can all work, provided the allocated memory pages required are fewer than the available physical memory pages plus the virtual memory (page file) storage space.

If you look at the memory addresses of heap arrays or thread-stack arrays in an Ifort OMP program, you can see these are spaced far apart with no issue.
Depending on the address capacity of the virtual memory page table, many gigabyte stacks should not be an issue, or introduce any inefficiency.

By allocating a larger memory address space to the stack, we simply don’t get stack overflow.
Programs that use (say) 100 GBytes of memory should not be limited to a stack size of less than 1 GByte, though they do remain limited by the available physical memory pages.

Stack limits in 64-bit OSes should be changed, especially in Windows.