Bit_size and digits

On my Linux x86_64 Ubuntu 22.04 system, digits(n)+1 == bit_size(n) for an integer n of any kind, according to these 5 compilers: AOCC flang, g95, gfortran, ifort, ifx.

I know the standard does not require that. My question is: are there any Fortran compilers in which digits(n)+1 /= bit_size(n)?
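For reference, a minimal sketch of the check involved might look like this (kind names come from the intrinsic iso_fortran_env module; which kinds a processor actually offers is implementation dependent):

```fortran
! Sketch: compare digits(n)+1 with bit_size(n) for each common kind.
! digits() and bit_size() are inquiry functions, so the variables
! never need to be defined.
program check_digits_bitsize
   use, intrinsic :: iso_fortran_env, only: int8, int16, int32, int64
   implicit none
   integer(int8)  :: i8
   integer(int16) :: i16
   integer(int32) :: i32
   integer(int64) :: i64
   print '(a,i3,a,i3)', 'int8 : digits+1 =', digits(i8)  + 1, '  bit_size =', bit_size(i8)
   print '(a,i3,a,i3)', 'int16: digits+1 =', digits(i16) + 1, '  bit_size =', bit_size(i16)
   print '(a,i3,a,i3)', 'int32: digits+1 =', digits(i32) + 1, '  bit_size =', bit_size(i32)
   print '(a,i3,a,i3)', 'int64: digits+1 =', digits(i64) + 1, '  bit_size =', bit_size(i64)
end program check_digits_bitsize
```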

Given the definition of digits() for integers, I would say no. digits() returns the number of significant binary digits. Since signed binary numbers also carry a sign, one bit has to be reserved for it, leading to your equation. It would be different if your n were unsigned. This is of course only valid for integers; for reals the situation is completely different.
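To make the sign-bit argument concrete, here is a small sketch. It relies only on the standard integer model, in which huge(n) = radix**digits - 1, i.e. every magnitude bit set:

```fortran
! Sketch: the largest value of a kind uses all bits except the sign
! bit, so counting the set bits of huge(n) recovers digits(n).
program sign_bit_demo
   use, intrinsic :: iso_fortran_env, only: int32
   implicit none
   integer(int32) :: n
   print *, 'bit_size(n)     =', bit_size(n)      ! 32
   print *, 'digits(n)       =', digits(n)        ! 31
   print *, 'popcnt(huge(n)) =', popcnt(huge(n))  ! 31: the sign bit is the one left over
end program sign_bit_demo
```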

Thank you @Arjen. I was wondering whether big- vs. little-endianness or one’s vs. two’s complement could matter here. And I’m not aware that unsigned integers exist in Fortran.

No, the only thing that matters here is the presence of a sign bit. The differences between one’s and two’s complement concern the interpretation, but there is still a bit reserved for the sign. And endianness is about the order of the bytes. I can imagine that some clever system might want to reserve more bits, so that you could have unset integers or an equivalent of NaN or infinity, but I have never heard of such a system.

And indeed, unsigned integers are definitely not part of the standard. I can imagine that some Fortran compiler has an extension to that effect, but I have never seen one.

The Sun/Oracle Fortran compiler for Linux and Solaris supports unsigned integers.

I think it should always be true for integers with radix(n) == 2.

The exception would be if some machine used padding bits for integers, or perhaps for integers of specific KINDs. I’m not aware of any modern Fortran compiler that does that.
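For completeness, the radix-2 premise itself can be inspected with the radix() inquiry; a sketch for the default integer kind:

```fortran
! Sketch: inspect the radix-2 premise next to the identity for the
! default integer kind.
program radix_premise
   implicit none
   integer :: n
   print *, 'radix(n)    =', radix(n)
   print *, 'digits(n)+1 =', digits(n) + 1
   print *, 'bit_size(n) =', bit_size(n)
end program radix_premise
```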

Thank you @mecej4 for mentioning the Sun/Oracle compiler. What are digits(n) and bit_size(n) for each of its signed and unsigned integer kinds of n?

STORAGE_SIZE() would change, but BIT_SIZE() should not be affected by extra bits that are not described by the integer model of the standard.
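A sketch of the distinction (on common x86_64 compilers the two agree for every kind; per the above, a machine with padding bits would report a larger storage_size):

```fortran
! Sketch: storage_size() reports the bits a scalar occupies in
! storage (padding included), while bit_size() reports the size of
! the model integer.
program storage_vs_bitsize
   use, intrinsic :: iso_fortran_env, only: int16, int64
   implicit none
   integer(int16) :: a
   integer(int64) :: b
   print *, 'int16: bit_size =', bit_size(a), ' storage_size =', storage_size(a)
   print *, 'int64: bit_size =', bit_size(b), ' storage_size =', storage_size(b)
end program storage_vs_bitsize
```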

I don’t know of any current machines that have integer padding bits, but there have been machines like this in the past. The CDC 6400, 6600, and 7600 machines were like this. They were 60-bit word addressable, but some of their integer operations only worked on 48 bits, effectively giving 12 padding bits. So if a modern Fortran compiler were implemented on these kinds of machines, storage_size(n) would return 60, but bit_size(n) would return 48. However, there were some operations, such as the bit operations, that worked on the entire 60-bit word, so that might have been inconsistent with the modern language conventions. I think the reason for this was that the floating-point format had a 48-bit mantissa, and some of the integer operations used that part of the floating-point unit.

IIRC, “the” UNIVAC had 36-bit numbers. It is too long ago for me to remember any details. According to Wikipedia, some early models had a 30-bit word size. (But that is an echo from the late 80s of the previous century.)