How to solve the same numerical problem in 7 different programming languages

I wasn’t comparing anything. I just wanted to make his code more readable. I don’t think he needed a whole separate subroutine to do those calculations inside that loop, so I condensed it into a single one-line statement.

That being said, the Fortran wiki says that double precision can be thought of as a kind of real. So my question is: what precision should I use for the real? (See the sketch after the links below.)

Links:
https://fortranwiki.org/fortran/show/Real+precision

https://www.ibm.com/docs/en/xl-c-and-cpp-aix/13.1.0?topic=fortran-corresponding-data-types
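
For illustration, a minimal sketch (mine, following the advice on the linked wiki page) of requesting a given precision portably with selected_real_kind rather than hard-coding kind=8:

program real_precision_demo
  implicit none
  ! request at least 15 significant digits and decimal exponent range 307,
  ! which on common hardware maps to IEEE double precision
  integer, parameter :: dp = selected_real_kind(p=15, r=307)
  real(dp) :: x
  x = 1.0_dp/3.0_dp
  print *, "kind =", dp, "  x =", x
end program real_precision_demo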

I tried your code with LFortran, and just by commenting out the execute_command_line and the open/write/close, which we haven’t implemented in the backend yet, it works! I am super happy about it.

I then modified your code to take a little longer to have a more meaningful benchmark:

program simple_num_prob
  implicit none

  integer :: i
  integer,parameter :: tf=25,n=100000000
  real,allocatable :: t(:),u(:)
  real :: dt
  real(kind=8) :: t1,t2
  character(len=*),parameter :: plt_file='simple_num_prob.gp'

  allocate(t(n))
  allocate(u(n))

  ! Assigning step value
  dt=real(tf)/real(n)


  ! Initial u value assigned
  u(1)=1.0E-5


  ! Population of t vector
  do i=1,n
     t(i)=real(i)
  end do

  call cpu_time(t1)

  ! Calculating and populating the function vector
  do i=1,n-1
     u(i+1) = u(i) + (dt*(u(i)*(1.0-u(i))))
  end do

  call cpu_time(t2)

  print *, "Elapsed time",t2-t1
  print *, "u(n) = ", u(n)

end program simple_num_prob

Then I timed it with LFortran and GFortran on my Apple M1 Max:

$ lfortran --fast b.f90
$ ./a.out
Elapsed time     0.37257999999999997
u(n) =  0.861650
$ gfortran -O3 -march=native -ffast-math -funroll-loops b.f90
$ ./a.out
 Elapsed time  0.46585100000000002     
 u(n) =   0.861649990    

So I’ll take these results. :slight_smile:

If any of you want to help us with LFortran, we are always looking for new contributors. LFortran is in alpha, but for simple code like this I would say it’s quite close to beta. I think we have a very solid design for the compiler that will allow us to deliver performance that will not disappoint, among other things. We’ve been making excellent, steady progress in implementing all Fortran features.

P.S. Regarding implicit none: LFortran already does what was proposed above, that is, it will not allow implicit typing unless you explicitly ask for it, either with an implicit statement or with a compiler option (the option is not implemented yet, but we’ll get it done soon). Try this:

program test_implicit_none
print *, i
end program

which prints:

$ lfortran b.f90 
semantic error: Variable 'i' is not declared
 --> b.f90:2:10
  |
2 | print *, i
  |          ^ 'i' is undeclared


Note: if any of the above error or warning messages are unclear or lacking context, please report it to us (we consider that a bug that needs to be fixed).

I am happy I was able to contribute :slight_smile:


I worked with Howard Harrison from 1974 to 1978.
The code he provided in his book was developed on punch cards and paper tape, so he did not have an IDE to help with code layout. I do not recall the computer system he used then, but most of the code was originally developed in Basic and then transferred to Fortran. It was after 1968, but coding was limited to whatever Fortran was available on the hardware being used.
I think I used an IBM 7040 at Sydney Uni in 1973, when I learnt WATFOR FORTRAN 66 via a card punch.
In ’74 we went to a CDC 6600 with disk storage and teletypes, then in ’75 a Prime 300 with CRT screens and a line editor.
I tried to get Howard to remove the arithmetic IF and include logical variables, but function was much more important than layout at that time.
There was a much greater emphasis on what the code did, rather than how it complied with a coding style, and I am very grateful for what he taught us back then.


Oops!! That is not the correct answer! There is lots of round-off with large n and small precision.

If you try real(kind=8) … you will get a very different answer: “u(n) = 0.99999861122129396”.

I have asked what value of n should be used for the benchmark, or whether n should be chosen based on convergence. Is this an issue for this exercise?

You can reduce n if you include the second derivative:

u(i+1) = u(i) + dt * du_dt + dt**2 / 2 * d2u_dt2

where du_dt and d2u_dt2 can be derived from the information available (see the sketch below).
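
As a hedged illustration (my code, not the poster’s): here du/dt = u*(1-u), and differentiating again gives d2u/dt2 = (1 - 2*u)*du/dt, so a self-contained second-order Taylor stepper might look like:

program taylor2_sketch
  implicit none
  integer, parameter :: dp = kind(0.d0)
  integer, parameter :: n = 1000000         ! far fewer steps than the benchmark
  real(dp), parameter :: tf = 25.0_dp
  real(dp) :: u, dt, du_dt, d2u_dt2
  integer :: i

  dt = tf/n
  u  = 1.0e-5_dp                            ! same initial value as above
  do i = 1, n-1
     du_dt   = u*(1.0_dp - u)               ! first derivative: logistic RHS
     d2u_dt2 = (1.0_dp - 2.0_dp*u)*du_dt    ! second derivative via chain rule
     u       = u + dt*du_dt + dt**2/2.0_dp*d2u_dt2
  end do
  print *, "u(tf) = ", u
end program taylor2_sketch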


If you print out the value of dt*(u(i)*(1.0-u(i))) for i = n-1, you will find that the value being added is smaller than machine epsilon times u(i) by a factor of about 3, which leads to the unfortunate result u(i+1) = u(i).
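
A quick sketch (mine, not from the post) of the effect: once the per-step increment dt*u*(1-u) drops below half an ulp of u, the addition rounds back to u and the iteration freezes. At u = 0.9, for instance:

program epsilon_check
  implicit none
  real :: u, dt, inc
  u   = 0.9                 ! a value beyond where the single-precision run stalls
  dt  = 25.0/1.0e8          ! tf/n from the single-precision benchmark
  inc = dt*u*(1.0 - u)      ! per-step increment, about 2.3e-8
  print *, "increment    = ", inc
  print *, "epsilon(u)*u = ", epsilon(u)*u
  print *, "u + inc == u : ", (u + inc == u)   ! .true.: the update is lost
end program epsilon_check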

It is hard to imagine how much more expensive disk space was, and how much more time it took to generate code.
Imagine that every 2,000 lines of code was another box of cards you had to carry around and store. Sadly, that is why so many old codes barely contain a comment. For example, taking a fairly significant code and looking at it without comments, blank lines, and declarations:

    LINES    BYTES      STATE
  259818  10085004   modern code
  132294   4548588   remove comment lines and blank lines
  113644   3860461   remove declarations

Today, the space saved is so trivial it is not worth a thought. But if you had to maintain that code on cards or paper tape, carry the boxes of cards to the computer room, read them through a card reader that periodically jammed, and generate the lines on a slow keypunch one line at a time, you did not put in any line that was not necessary. It really is hard to imagine; and as a result, a lot of otherwise useful code does not have a line of documentation included with it – a greater loss in my mind than the lack of type declarations :frowning_with_open_mouth:

Ten more boxes of just declarations!

Yeah, I noticed the use of single precision, which is usually not enough. Here is a double precision benchmark:

program simple_num_prob
  implicit none
  integer :: i
  integer,parameter :: tf=25,n=100000000, dp = kind(0.d0)
  real(dp),allocatable :: t(:),u(:)
  real(dp) :: dt
  real(dp) :: t1,t2
  character(len=*),parameter :: plt_file='simple_num_prob.gp'

  allocate(t(n))
  allocate(u(n))

  ! Assigning step value
  dt=real(tf,dp)/n


  ! Initial u value assigned
  u(1)=1.0E-5_dp


  ! Population of t vector
  do i=1,n
     t(i)=i
  end do

  call cpu_time(t1)

  ! Calculating and populating the function vector
  do i=1,n-1
     u(i+1) = u(i) + (dt*(u(i)*(1-u(i))))
  end do

  call cpu_time(t2)

  print *, "Elapsed time",t2-t1
  print *, "u(n) = ", u(n)

end program simple_num_prob

And results:

$ lfortran --fast a.f90
$ ./a.out
Elapsed time     0.39530999999999999
u(n) =      0.99999861122142053
$ gfortran -O3 -march=native -ffast-math -funroll-loops a.f90
$ ./a.out
 Elapsed time  0.48896099999999998     
 u(n) =   0.99999861122142053     

That’s a truly impressive response by LFortran, @certik, kudos – really, hats off to you.


What is the title of the book, please? Is the code available online? I will add it to GitHub – Beliavsky/Fortran-related-books (books with Fortran code, other than textbooks).


The general consensus is to use native double precision, real(real64), for scientific calculations, and single precision, real(real32), for 3D graphics. I am using the iso_fortran_env constants to define precision in line with modern CPU co-processor numeric types.
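
For illustration, a minimal sketch (mine) of declaring kinds with the iso_fortran_env constants described above:

program kinds_example
  use, intrinsic :: iso_fortran_env, only: real32, real64
  implicit none
  real(real64) :: x   ! double precision for scientific calculations
  real(real32) :: y   ! single precision, e.g. for 3D graphics
  x = 1.0_real64/3.0_real64
  y = real(x, real32)
  print *, x, y
end program kinds_example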


I never met Harrison. I did attend an FEM seminar on Fortran and FEM at Sydney Uni in about 1982 with my boss. At that stage we were designing 4000-tonne coal bins with horrendous switch loads. I just listened to the conference, and at the end my boss asked why I had not told them what we were doing with the USC code. I looked at him and said: because we make a lot of money redesigning hand-designed bins.

I have modernized his code, and yes, I spent many tortured days removing the arithmetic IF and other statements and, LOL, adding implicit none. I will post it on the Intel Fortran forum.

To understand Harrison’s place: there are two beam theories of interest now. One is the “linear” beam theory from Timoshenko, on which many thousands of books have been published; the other is spar theory, developed by the British air force in 1915, on which there are about 6 critical books and papers. We now have accelerometers that provide endless, highly accurate data, and the Timoshenko method fails against this data set: there are simply not enough variables to get a match for the range of frequencies we measure [I have spent months trying to get around this problem, but you cannot, given the limits of the theory]. For the 3D spar theory, on the other hand, you are looking at very large program loops that take days, and the load needs to be applied slowly. So the reason people like Timoshenko is simplicity, and that was fine up until about 2000. By 2060 we will have cleaned out the engineers who were trained on the old methods in civil engineering.

I should say that @mecej4 provided a lot of the ideas and help to get Harrison’s program to a great state. How he does all the things he does is beyond my imagination.

Harrison, H.B., “Computer Methods in Structural Analysis”, Prentice Hall, New Jersey, 1973.
It was the textbook for my undergraduate structural engineering course, and I think a revised edition was released a few years later.


Thanks. I added

Harrison, Howard B. (1973). Computer Methods in Structural Analysis. Prentice-Hall

Harrison, Howard B. (1990). Structural Analysis and Design: Some Microcomputer Applications, 2nd ed. Pergamon

The site of the Transportation Research Board, part of the National Academies of Science, lists 636 papers and books related to FORTRAN (their spelling).

Related to the original topic: has anyone seen implicit none (type, external) in the wild yet? The external part should also be the default! They should never have doubled down on implicit none like that. That was just a terrible decision from the point of view of trying to get new users for Fortran.
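
For anyone who has not seen it, here is a minimal sketch (mine) of the Fortran 2018 form. With the external specifier, any external procedure you reference must be declared explicitly, so an undeclared or misspelled procedure name is caught at compile time:

program implicit_none_demo
  implicit none (type, external)   ! forbid implicit typing and implicit externals
  external :: greet                ! external procedures must now be declared
  call greet()                     ! calling an undeclared name would be an error
end program implicit_none_demo

subroutine greet()
  print *, "hello"
end subroutine greet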

Those are real issues, but they do not seem to have any effect on its adoption. Pythonistas seem to be happy with the new features.

Yes! Absolutely. As each year goes by, it becomes clearer that Python made the right call, just as it becomes clearer that Fortran’s “backward compatibility at all costs” is killing the language.

And the worst thing is that Fortran is actually in better shape for dealing with breaking changes, since we have compiler flags (something that doesn’t exist in Python). Nothing could be easier than to require an -I-prefer-1977 flag for people who can’t or won’t make the most trivial changes to their 40-year-old code. Instead, the rest of us are made to suffer.
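
As an aside (my example, not from the post above): the opt-in direction already half-exists today; for instance, GFortran’s -fimplicit-none flag makes every program unit behave as if it contained implicit none:

$ gfortran -fimplicit-none old_code.f90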


This argument could go either way.


I just don’t buy the argument that we have to be stuck with the mistakes of our ancestors until the end of time.

call doesn’t cause bugs. Forgetting implicit none does. Every single new user is bitten by that. Every single new user who learns why they have to type that all over the place thinks “wow, that’s really dumb”. Enough is enough.

I think it’s more important to attract new users than to make absolutely sure that maintainers of legacy code don’t have to make even the most trivial changes to their build system. The legacy codes are going away. When all the legacy code goes away and we don’t have any new users, then Fortran is dead.


I agree that implicit none is an annoyance that can be circumvented rather easily.
Still, I would prefer to simply remove such an oddity. As @jacobwilliams wrote, the ability to specify a Fortran version as a compiler flag makes this transition rather easy. This has happened in the past: at least when functions became recursive by default, some older code stopped working.
