Language Marketing for Julia compared to Fortran

@lmiq I am afraid you did not get the point. Of course, the critical aspect of moving Fortran forward is to have everything @certik mentioned below. Without that, this discussion doesn’t make any sense.

However, I am not sure this is a zero-sum game. I think we could act in parallel with a marketing initiative. I don’t know the answer, but I know we can think it through together.

1 Like

I don’t think that marketing does any harm. But I don’t think, either, that it is marketing that makes these new languages popular. These languages have something to offer, and the way to move forward is to understand what that is, understand the tradeoffs associated with these new features, and see whether Fortran can and/or wants to incorporate them or offer something better. Thus, my point was that the sentiment that marketing, or corporate backing, is the only reason for the popularity of these alternatives is wrong, and with that perspective we cannot learn and advance.

5 Likes

I’m in 100% agreement that the reason people move to other languages has less to do with marketing than with the newer languages offering features that provide a major advantage in development costs over Fortran, which appears to be in a perpetual state of “catch up”. Even when we get new features like OOP, they arrive (in my humble opinion) half-baked, poorly thought out, and then poorly implemented. A lot of that could have been avoided if the standards committees had just taken the time to find out which features in other languages are attracting people away from Fortran and why people want to use them. I’ve long advocated that, prior to even considering a new feature (like templates), the committee should actually study how it is used in real-world applications. If the committees of the past (and present) would actually look at modern Finite Element codes, CFD codes, Climate codes, etc. written in C++, Julia, Python, etc. and survey which features not present in Fortran are actually being used, I think they would change their priorities. If this had been done prior to Fortran 2003, I think we would have had generics/metaprogramming before OOP, because in my experience things like inheritance and dynamic polymorphism take a back seat to generics. Again, take a look at modern Finite Element or CFD codes written in C++ and you will see a lot of templates and a lot of use of the STL, but not a lot of inheritance.
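As a concrete sketch of the duplication I have in mind (a minimal made-up example, not taken from any of those codes), this is what “generic” code tends to look like in current Fortran: one specific procedure per type, collected under a generic interface, which is exactly the boilerplate that templates or true generics would remove.

module swap_mod
   implicit none
   interface swap
      module procedure swap_int, swap_real   ! one copy per supported type
   end interface swap
contains
   subroutine swap_int(a, b)
      integer, intent(inout) :: a, b
      integer :: t
      t = a; a = b; b = t
   end subroutine swap_int
   subroutine swap_real(a, b)
      real, intent(inout) :: a, b
      real :: t
      t = a; a = b; b = t
   end subroutine swap_real
end module swap_mod

Every additional type (other kinds, derived types, …) means another near-identical copy, whereas a single C++ template, or a true Fortran generic, would cover them all.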

15 Likes

@rwmsu I agree. I think the short answer is that the committee needs our help. So I and others have joined the committee, and if you have time, I recommend doing that as well. You can also help us with just the generics: GitHub - j3-fortran/generics (if you think it’s a priority). See also Kickstarting proposals for F202Y features.

… I think they would change their priorities

Please let us know what you think the priorities should be.

I think partly the issue is that everybody is busy, and so the committee work is done by a few who are able to do it. This would be fixed if you and others here could join and help.

The other issue is that we do not always agree on what should be done, either here in this forum or at the committee. That we can fix by discussing what the priorities should be. So if you can give us feedback, that would be helpful.

9 Likes

Shall we make a poll with proposals for F202y features?

1 Like

As an aside, Bob is an active member on the Fortran standards committee. His understanding of the subtleties of Fortran and its implementation is amazing. We are lucky to have his involvement.

3 Likes

One of the things I’m hoping to do in the near future is compile a list of the standard’s “programmer’s responsibilities” (the “shall” clauses in the standard, typically), and a list of the processor-dependent features. (Just FYI, Annex A in the standard lists the processor dependencies.)

1 Like

Thanks for the data. I’m retired now, so my time is worth nothing. :slight_smile:

It almost seems sad that the compiler is responsible for determining no more than 50% of the language violations…

4 Likes

We can narrow that down a bit.

T:\>pdfgrep "C[1-9][0-9]* " 22-007r1.pdf > constr22b.txt
T:\>grep "^ *[1-9][0-9]*  *C[1-9][0-9]* " constr22b.txt | wc
    663   10681   70120

Thus, the number of constraints is about 1 per page of the 22-007r1 draft.

As to the "shall"s, the 663 that appear as numbered constraints are the compiler's responsibility to diagnose, leaving 1786 for the programmer to memorize and obey.
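To make the distinction concrete, here is a minimal made-up example (mine, not taken from the standard): violating a numbered constraint must be diagnosed by a conforming compiler, whereas violating an ordinary "shall" requirement need not be.

program shall_demo
   implicit none
   integer, parameter :: n = 3
   integer :: a(n)
   ! Constraint violation: assigning to a named constant. A conforming compiler
   ! must diagnose the next line, which is why it is commented out here.
   ! n = 4
   a = [1, 2, 3]
   ! "Shall" violation the compiler need not detect: the same variable is
   ! associated with two dummy arguments while one of them is being defined
   ! (the anti-aliasing rules of argument association).
   call twiddle(a, a)
   print *, a
contains
   subroutine twiddle(x, y)
      integer, intent(in)  :: x(:)
      integer, intent(out) :: y(:)
      y = x + 1
   end subroutine twiddle
end program shall_demo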

1 Like

This came across one of my feeds and I wanted to chime in from a Julia perspective as someone doing “Julia Marketing”. I currently serve part-time as the developer community advocate for Julia. Just to clarify, Julia is an open-source project that is part of the NumFOCUS 501c3 non-profit (the same org that is behind Jupyter, Pandas, NumPy, LFortran, etc) where I am also on the Board.

There is no “marketing” function of the project or really any organized effort to do so beyond people who are authentically excited about the language and ecosystem. I would guess that 99% of all Julia stuff out there is organic and created by enthusiastic users. I myself started as an enthusiastic user while working at NASA and as I got more involved in the language, I wanted to contribute and share what I learned. The good part is that this model can be replicated by other open source projects. Like Fortran, the Julia project has lots of things to overcome from a messaging standpoint. Why would I even use Julia since Python has such a large ecosystem (and 100 other variations of this question)?

As was mentioned by other people in this thread, for Fortran, the challenge is overcoming the perception that it is a “dying” and “old” language. Again, I will posit that we are all in the same boat here, fighting against a perception that is not always true because we have a deep belief that our tool and ecosystem can help people more effectively solve their problems.

I am currently working on a talk for Upstream 2022 specifically about how to have the right technical and behavioral mechanisms in place to create advocates for your open source project. I, and surely other people in the Julia community, would always be happy to help advise the Fortran folks on how to effectively get their message out (hopefully in a way that doesn’t rub people the wrong way, as it seems some of the Julia “marketing” has). I will be candid and mention that I don’t really have a good understanding of the Fortran community, so this may well need to be an ongoing effort; I am just giving my reaction to this post, where it was implied that isn’t the case, and offering a hand to help!

15 Likes

Welcome @logankilpatrick, great to have you here!

3 Likes

@logankilpatrick, welcome to this forum and thanks for your comments.

A different approach to comparing programming languages is sometimes more appropriate than selecting just one attribute from, say, speed of execution, the quality of the tools available, the pleasure of using an IDE, the ease of debugging, what language the other members of one’s team wish to use, etc. We could ask, for example, “Here is a programming task that I have to execute. Which of the languages and tools that are available to me is the best choice for this task?” The resulting selected language could be different for different programming tasks.

Since several people have posted about the speeds of Julia and Fortran programs, here is a link that may be of interest. There, we can find source codes and comparisons of run times, for a number of languages and a number of tasks of medium size. There are additional pages at the same site where comparisons are made over a larger family of languages including C, Rust, Java, and so on.

My impression from the results at that site is that Julia and Fortran are worthy competitors. The Fortran source codes, however, could be made a bit more modern, with fewer DO loops. For instance, here is my go at a Fortran program to perform the Spectral Norm benchmark. I hope that some readers will find the code easier to read and understand than, say, the C version posted at the benchmark site.

program spnormex
!https://benchmarksgame-team.pages.debian.net/benchmarksgame/fastest/julia-ifc.html
!Spectral norm example
use BLAS95
implicit none
integer, parameter :: dp = kind(0d0)
real(dp), allocatable :: u(:), v(:), A(:,:)
integer i, j, n
character(len=6) :: arg1

call get_command_argument(1,arg1)
read(arg1, *)n
allocate(u(n), v(n), A(n,n))

! Fill A with the spectral-norm benchmark matrix: A(i,j) = 1/((i+j-1)*(i+j-2)/2 + i)
do j = 1, n
   A(1:n,j) = 1d0/[((i+j-1)*(i+j-2)/2+i, i=1,n)]
end do

u = 1d0
! Power iteration: repeatedly apply A**T*A, alternating the roles of u and v
do i = 1, 10
   call MultAtAv(u,v)
   call MultAtAv(v,u)
end do

print '(A,F11.9)',' Computed result: ', &
   sqrt(dot_product(u,v)/dot_product(v,v))
print *,'Expected result: 1.274224153'

contains

subroutine MultAtAv(v,AtAv)
   ! AtAv = A**T * (A * v); gemv is the generic BLAS95 interface to ?GEMV
   implicit none
   real(dp), intent(in)  :: v(:)
   real(dp), intent(out) :: AtAv(:)
   real(dp) :: u(size(v))
   call gemv(A,v,u)                 ! u = A*v
   call gemv(A,u,AtAv,trans='T')    ! AtAv = A**T * u
end subroutine MultAtAv

end program

1 Like

@mecej4 can you contribute your code to GitHub - fortran-lang/benchmarks: Fortran benchmarks? I think your solution is along the lines of what I did about 10 years ago:

and the faster Fortran solution probably would not be allowed, since it precalculates the matrix. So that’s why we have to maintain our own benchmarks.

1 Like

Thanks @logankilpatrick for your comment and welcome! I am looking forward to reading your Upstream 2022 presentation to see how we could improve our messaging. Indeed I think Julia and Fortran are in a similar boat.

One main difference is that Julia started from scratch, while in Fortran we had to (and still have to) first consolidate the existing Fortran community, which previously had many separate efforts but no unified approach. I think we have mostly fixed that: we now have this forum and the Fortran website (https://fortran-lang.org/). I can say for myself that it was (and still is) very hard to navigate the fine line of finding common ground between the various (competing) interests and getting us all to sit at one table as “team Fortran”. I think C++ has to deal with similar issues, but Julia doesn’t (for example, I think there is currently only one compiler for Julia).

7 Likes

Ondřej, I have no objection to someone else using the code as a speed benchmark if they wish, but in reality it makes a poor benchmark.

First, the code is written to make 10 sweeps, even though the norm has converged after the second iteration. Second, we are not measuring the speed of Fortran at all, since the computationally intensive work is done in the BLAS library routines, which may have been compiled from sources that use a mix of Fortran, C, and assembler. All that the Fortran code does is set up the arguments to pass to the BLAS and then print out the results.

As to the rules not allowing one to precalculate the matrix, aren’t they silly? The matrix is constant; why on earth would one want to call a function 5500 X 5500 X 10 times, when the calculation can be done inline in a DO loop? Depending on the language and how the code is written, the compiler’s optimizer may do the same kind of “cheating” behind the scenes, anyway!

2 Likes

@mecej4 yes, exactly. That’s why I think this “numerical benchmark” is not a good benchmark. The only other numerical benchmark in there is the n-body benchmark, which I think is actually not bad. It’s a bit specialized (mostly measuring how quickly you can compute 1/sqrt(x)), but that operation is very common in these kinds of simulations. However, Fortran excels in many other numerical simulations that are not being tested by those benchmarks at all.

2 Likes

I have added nbody6.f90 to GitHub - fortran-lang/benchmarks: Fortran benchmarks. This is a modified version of the older program at the Debian benchmarks site.

The performance results at the Debian site already show Fortran as among the fastest.

An older thread on this forum, Fortran Benchmark, involves the Nbody example.

On my PC, using Ifort OneAPI 2021.5 in Windows 11, my modifications gave a speed gain of 18 percent compared to the older Fortran program, and the new Fortran version is about 13 percent faster than the Julia version on the Debian benchmarks site.

This version does not use the special technique found in the Julia version of Nbody: the x64 SSE instruction RSQRTSS to generate a single-precision approximation to 1/sqrt(x), followed by a single iteration of double-precision Newton-Raphson to polish the root. I left it out because it did not yield better performance than the straightforward calculation of x^(-3/2) for a given x.
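For readers who have not seen that trick, here is a minimal sketch (my own illustration, not the benchmark code) of the polishing step: start from a cheap low-precision approximation to 1/sqrt(x), which the Julia version obtains from RSQRTSS, and apply one double-precision Newton-Raphson iteration.

program rsqrt_demo
   implicit none
   integer, parameter :: sp = kind(1.0), dp = kind(1d0)
   real(dp) :: x, y
   x = 2.0_dp
   ! Cheap seed: a single-precision reciprocal square root (stand-in for RSQRTSS)
   y = real(1.0_sp/sqrt(real(x, sp)), dp)
   ! One Newton-Raphson step for f(y) = 1/y**2 - x, i.e. y <- y*(1.5 - 0.5*x*y*y)
   y = y*(1.5_dp - 0.5_dp*x*y*y)
   print *, 'polished 1/sqrt(2) =', y
   print *, 'reference          =', 1.0_dp/sqrt(x)
end program rsqrt_demo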

This is the first time I have uploaded a file to GitHub, and what I uploaded may not have ended up in the proper subdirectory. Sorry!

4 Likes

You should also look at a molecular viewer and building package called Molden, mostly written in Fortran with some C… it’s impressive what libXmu and similar libraries can do: MOLDEN, a visualization program of molecular and electronic structure.

4 Likes

Welcome @Jameschums!

Many thanks, because you have given me an idea. We are performing MD simulations of the phases in neutron star crusts, and we will take a look at that package, although in the comment you are replying to I was thinking of 2-D plots for other kinds of problems…

Since I signed up for the fortran-lang Discourse a few days ago, I have read many old posts and learned about so many things/facts/problems/tools I simply was unaware of that I am overwhelmed…

3 Likes

Molden has functions for electrostatic plots in 2D and even 3D from energy calculations on molecules. I am currently fiddling with cmsi/smash, a GitHub project, to add solvation (PCM) to its rather fast and precise HF, B3LYP (with VWN5), and MP2 codes… it was quite easy to adapt the output to generate xyz files for viewing in Molden: GitHub - cmsi/smash: Massively parallel software for quantum chemistry calculations.

2 Likes