Is the behavior of -standard-semantics expected?

I encountered some unexpected behavior with ifx 2024.2.0 20240602 when using the -standard-semantics flag. Here’s a minimal example to illustrate:

program test
    implicit none
    real a
    a = 0.012345678
    print *, a            ! list-directed output
    print '(f11.9)', a    ! fixed form, 9 decimal digits
    print '(g0)', a       ! g0: processor-chosen "reasonable" form
end program

When I compile and run the program with the flag, I get the following output:

ifx -standard-semantics test.f90 && ./a.out

 1.2345678E-02
0.012345678
.1E-01             not expected!

However, without the flag, the output is what I expected:

ifx test.f90 && ./a.out

  1.2345678E-02
0.012345678
.1234568E-01      (7 significant digits)

As fpm uses this flag, it took me some time to identify the problem. Is this behavior expected when -standard-semantics is used?
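
For reference, a source-level workaround (assuming the format strings can be edited) is to give g0 an explicit digit count, which Fortran 2008 allows for real values. A minimal sketch:

program test_g0d
    implicit none
    real :: a
    a = 0.012345678
    ! g0.d requests an explicit number of significant digits, so the output
    ! no longer depends on the processor's choice for plain g0
    print '(g0.7)', a
end program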

I recall reading about this before. Here’s some more information from the Intel forum:


I remember that and responded with my analysis at the time. I’ve asked one of my Intel contacts to take a look at it.


We haven’t gotten to this issue to date. The bug ID is CMPLRLIBS-34487. I am following up with our Runtime Libraries team. For now, you can use the option -assume old_e0g0_format (Linux) or /assume:old_e0g0_format (Windows).
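
For the example above, that amounts to adding the option next to -standard-semantics on the compile line, e.g. (untested sketch):

ifx -standard-semantics -assume old_e0g0_format test.f90 && ./a.out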


Use -standard-semantics by default with Intel compilers · Issue #868 · fortran-lang/fpm · GitHub was the main thread in fpm(1). Some of the other discussion there shows that this issue and the performance issues were never completely addressed. So would the recommendation be that, instead of -standard-semantics, a series of -assume options should be used; or that fpm(1) should be changed to add -assume old_e0g0_format if -standard-semantics is kept? That is, if fpm(1) tries to encourage standard-conformant coding in general, is the current set of compiler defaults appropriate?

As I see it, a quick PR should add -assume old_e0g0_format (?)
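
In the meantime, unless I’m misremembering fpm’s options, a user can add it on their own with --flag, e.g. (untested sketch; I believe --flag appends to fpm’s default flags, but that is worth double-checking):

fpm run --compiler ifx --flag "-assume old_e0g0_format"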
