What are the advantages of Julia over Fortran?

What are the main advantages of Julia over modern Fortran from:

  1. Numeric/computation aspect?
  2. Non-numeric/non-computation aspect?

Thank you very much!

First of all, let me make it clear that I don’t make extensive use of modern Fortran, so there is a chance I have made a few minor mistakes. If so, please correct me.

Fortran advantages:

  • ease of compilation/distribution
  • difficulty of writing slow code: Fortran is stricter than Julia, and as a result it is slightly easier for new users to write performant code in Fortran than in Julia.

Julia non-numeric advantages:

  • Ecosystem: Julia has around 6000 packages. fpm has fewer than 50 (and many of those 50 are things that are in Julia’s Base/Stdlib, e.g. regex, strings, TOML, processes, dates/times, unit testing, and hash tables)
  • interoperability: It is common for Julia libraries to work together even when they were not explicitly designed to do so. A great example of this is Plots + DifferentialEquations + IntervalArithmetic, which can be used to solve differential equations with rigorous error bars.
  • macros: Julia’s Lisp-like nature makes it really easy to manipulate code (think a Fortran preprocessor, but built into the language and better).
  • hardware support: Julia lets you write programs that use Nvidia GPUs, AMD GPUs, multithreading, and multiprocessing really easily. Fortran has do concurrent, which requires switching to a different compiler to change how your program uses these types of hardware, and it doesn’t currently support programs that use multiple types of parallelism. Also, I don’t believe there are modern Fortran compilers that work with AMD GPUs (correct me if I’m wrong).
  • interop: Julia can very easily call code from Python, R, C, C++, and Fortran. Fortran can’t currently call Python, R, or Julia.
  • ease of contribution: If you want to change Fortran, you need to pay to be part of the standardization committee, wait 3 to 5 years for a new version of Fortran, and then wait another 3-to-infinity years for compilers to support the new feature. If you want to change Julia, you can make a pull request on GitHub in an afternoon, and there is a meeting open to public participation every 2 weeks to discuss new features for the language (also, Julia releases a new version every 4–8 months).

Julia numeric advantages:

  • performance: most open-source Fortran compilers have relatively poor implementations of math functions (e.g. sin, cos), and do a worse job vectorizing them than Julia + LoopVectorization
  • generic code: Julia makes it really easy to write code that is generic over all element types, i.e. Float32, BigFloat, Rational, etc. This means it is much easier to write libraries (since you duplicate much less code), and it enables advanced techniques such as autodiff (see below)
  • automatic differentiation: Since most Julia functions work with arbitrary types, you can get gradients/Jacobians automatically that are extremely efficient. This has led to Julia having much simpler (and often faster) optimization and differential equation solvers than Fortran.
  • symbolic computation: The same flexibility also allows arbitrary symbolic simplification of Julia code which can lead to major speedups in complicated models.
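
The generic-code and autodiff points above combine nicely, and the core trick can be shown in a few lines. This is a minimal sketch in Python (chosen purely for illustration; `Dual` and `f` are made up here and are not Julia’s actual ForwardDiff machinery): a function written generically over its number type returns derivatives for free when fed "dual numbers" that carry a derivative component along.

```python
class Dual:
    """A number a + b*eps with eps**2 == 0; `der` tracks the derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__

    def __mul__(self, o):
        # Product rule: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__

def f(x):
    # Generic code: works unchanged for float, Fraction, Dual, ...
    return 3 * x * x + 2 * x + 1

y = f(Dual(2.0, 1.0))        # seed dx/dx = 1
print(y.val, y.der)          # f(2) = 17, f'(2) = 6*2 + 2 = 14
```

A full forward-mode implementation would overload the rest of the arithmetic and math functions the same way; the key point is that the generic `f` never had to be told about differentiation.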

I’m sure there’s a lot of stuff I’m missing, but I think this gives a pretty good summary.

4 Likes

Thank you @oscardssmith ! I appreciate your elaboration.
The auto differentiation you mentioned in Julia looks particularly interesting.
I guess if I have the money I will invest in exploring auto diff in Fortran :slight_smile:

If I remember correctly, some of the Fortran code written by @jacobwilliams can indeed call some Python stuff from Fortran in some way. E.g.,

I think similar techniques can be used in calling R and Julia from Fortran in some way.

Besides Intel and Nvidia, I think AMD has its own version of a C/C++/Fortran compiler too (but perhaps not for GPUs),

I think Intel’s strong interest in the GPU market, plus the name and purpose of the free Intel oneAPI and their new IFX compiler, may already have made Fortran on GPUs available.
There is a thread here too about Fortran + GPU programming,

I could be wrong, but it seems inevitable that each of the big players like Intel, AMD, and Nvidia will keep developing their own Fortran compiler distributions, optimized particularly for their own hardware. But anyway, for some reason they just do not give up on Fortran :slight_smile: I guess they have their reasons; perhaps Fortran is relatively easy to optimize for hardware at the compiler level.

Sure, I agree; perhaps someone could split this off into a new thread.

I don’t see any reason Python, R, or Julia code couldn’t be used from Fortran if stdin/stdout and pipes were used on *nix systems. There are certainly efficiency concerns with that approach, which is why it is rarely seen in practice: a domain expert could implement the algorithm in Fortran directly instead of calling out to a less efficient implementation.
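
As a sketch of the stdin/stdout approach described above (using Python on both ends purely for illustration; the child script here is made up): the parent writes data down a pipe as text and reads results back, and the serialization plus process round-trip is exactly the efficiency concern mentioned.

```python
import subprocess, sys

# Child "solver": reads numbers from stdin, writes their squares to stdout.
child_src = "import sys\nfor line in sys.stdin:\n    print(float(line) ** 2)"

result = subprocess.run(
    [sys.executable, "-c", child_src],
    input="1.5\n2.0\n",          # data crosses the pipe as text
    capture_output=True, text=True, check=True,
)
print(result.stdout.split())     # ['2.25', '4.0']
```

A Fortran parent would do the same thing with `execute_command_line` or a spawned process plus formatted I/O; every value pays a format/parse cost on each side of the pipe.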

Is there something I’m missing or mistaken about?

The big problem is that it will be about 1000x slower. Julia’s interop often shares memory, and when it doesn’t, it makes a simple memory copy of the data structure. This isn’t necessarily a problem for one-time uses (like initial data parsing), but it will be cost-prohibitive for many use cases.

1 Like

So basically Julia embeds another programming language interpreter in its runtime in order to use the libraries. Is that accurate?

No, but the behavior the user sees is similar. What PyCall (for example) is doing under the hood is making C calls to Python’s C API (you link an actual Python installation when you build the package). Other language wrappers have some differences, but they mostly work in similar ways.
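
The mechanism described (wrapping the other language’s C API rather than shelling out) can be poked at from Python itself via `ctypes`, which exposes the running interpreter’s C API. This is only a sketch of the idea; PyCall does the moral equivalent from Julia by calling these same entry points through a linked libpython.

```python
import ctypes, sys

# ctypes.pythonapi is the C API of the CPython interpreter we're running in.
Py_GetVersion = ctypes.pythonapi.Py_GetVersion
Py_GetVersion.restype = ctypes.c_char_p   # C signature: const char *Py_GetVersion(void)

version = Py_GetVersion().decode()
print(version.split()[0])                  # e.g. '3.11.4'
assert version.split()[0] == sys.version.split()[0]
```

Because the call goes straight into the C runtime, arguments and results are passed as C values rather than serialized text, which is why this style of interop is so much cheaper than pipes.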

I believe LFortran has plans for similar features, but they don’t yet exist.

1 Like

Do you have any idea how linking to a Python, R, or Julia binary affects performance relative to linking a direct C/C++ implementation? I’d imagine it’s an order of magnitude slower, but I don’t trust my intuition in this area.

In the thread Fortran calling R I showed a Fortran code writing unformatted stream data that is then processed by R, and @ivanpribec demonstrated a better approach using his Fortran-Rinside package.

3 Likes

The short answer is that linking to Python/R gives you roughly the speed of the Python/R code. This is typically between 2x slower (if the Python/R calls out to C before doing a lot of work) and 100x slower (if it’s pure Python). This is often completely acceptable for non-performance-critical code.

Linking to Julia vs linking to C is totally implementation dependent. Julia is as fast as C for equivalent code, but code is rarely equivalent.

1 Like

There seems to be some missing history here. The creators of Julia took the FreeBSD libm source code and created OpenLibm. OpenLibm then provided math functions (e.g., sin, cos) for Julia. Along the way, OpenLibm/Julia contributors were willing to accept patches from someone who worked on improvements to FreeBSD libm, but never seemed to reciprocate. At some point, it seems that Julia contributors took a Not-In-My-House approach and decided to re-implement the math functions.

It should also be noted that the likely most popular open-source Fortran processor runs on dozens of operating systems and CPU architectures. Rather than spend their time reimplementing math functions, its developers decided to rely on the libm specified in the C standard and provided by the operating system.

Can you define what you mean by relatively poor implementations? Where are the superior algorithms, used by Julia, documented?

Fortran Pros:

  • International standard (for more than 70 years).
  • Multiple commercial and open-source vendors to choose from.

Julia Cons:

  • No standard.
  • Vendor lock-in.

5 Likes

The basic reason for the switch to special functions implemented in Julia is that they are (in my opinion at least) easier to write, and are easier for the compiler to vectorize. The downside of relying on the OS-provided libm is that (especially on Windows) it can be a buggy mess (for example, the Windows fma implementation produces incorrect results in hard-to-round cases, and is generally really slow). Also, using the OS-provided libm means that different operating systems produce different results.

What I mean by “relatively poor implementations” is that this thread revealed that idiomatic Fortran code was slower due to worse implementations of trig functions. In general, the algorithms Julia uses are not documented externally, but they are tested extensively to make sure they have high accuracy, and the source is easy to read if you want to port them. The speedups generally come from very carefully chosen polynomial coefficients and reduction strategies.
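
To make “polynomial coefficients and reduction strategies” concrete, here is a toy sketch (in Python; the coefficients are just truncated Taylor terms, not the true minimax ones a real libm would use, so accuracy is illustrative only): sin on an already-reduced range is just a short odd polynomial in Horner form, which is exactly the branch-free shape compilers can vectorize.

```python
import math

# Toy sin for |x| <= pi/4 using truncated Taylor coefficients.
# Real implementations use minimax coefficients, which squeeze out
# more accuracy at the same polynomial degree.
C3, C5, C7 = -1/6, 1/120, -1/5040

def sin_poly(x):
    x2 = x * x
    # Horner form: x * (1 + x2*(C3 + x2*(C5 + x2*C7)))
    return x * (1.0 + x2 * (C3 + x2 * (C5 + x2 * C7)))

x = 0.5
print(sin_poly(x), math.sin(x))
print(abs(sin_poly(x) - math.sin(x)) < 1e-8)   # True on this range
```

The hard parts a production implementation adds are the argument reduction down to this range (accurate mod pi/2 for huge inputs) and the coefficient selection; the evaluation loop itself stays this simple.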

Fpm is still a relatively young effort, so this specific comparison is somewhat of an underestimate. The number of Fortran libraries is bigger once you factor in code from Netlib, the Fortran-lang Package Index, vendor libraries like NAG and IMSL, and dozens of other codes scattered around the internet.

Julia’s fast growth is impressive by all means, so is the variety and sophistication of some of the flagship packages. We are already seeing Julia drive new trends in computing, so I expect the interaction between the two communities will increase.

However, I will note that fast growth does have some cons. The number of deprecated Julia packages seems to grow monthly. This fast growth, and hence instability, was a big deterrent when I tested Julia 6 or 7 years ago. I read that the situation has become better now.

4 Likes

I’m familiar with that thread. My conclusion was that micro-benchmarks typically find what the individual went looking for. I also concluded that generalizations are rampant in FD. The failure(s) of Microsoft’s libm are somehow extended to all other libms. Having contributed quite a bit to FreeBSD’s libm, I can say that minimax polynomial coefficients are carefully chosen, as are computation strategies.

2 Likes

Then why not interface with another math library? For instance, I work with the crlibm library, as I want functions that consistently give the same results. But there is no reason you couldn’t link against a high-performance library instead (assuming one exists). Writing the interface to crlibm (which is in C) was tedious but straightforward, and once done it can be completely forgotten about.

There are multiple BLAS/LAPACK implementations out there optimized for different problems/architectures. There is no reason there shouldn’t be multiple libms optimized for different problems/architectures.
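
Linking against an external libm really is mechanical once the interface exists. Here is the same idea sketched from Python with `ctypes` (Fortran’s `iso_c_binding` plays the analogous role; the library lookup assumes a typical Unix-like system where `find_library("m")` resolves to the system math library):

```python
import ctypes, ctypes.util, math

# Locate and load the system math library (glibc Linux: libm.so.6).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double cos(double)
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))                           # 1.0
print(abs(libm.cos(1.0) - math.cos(1.0)) < 1e-15)
```

Swapping in crlibm or another implementation is just a matter of loading a different shared object behind the same declared signatures, which is exactly why multiple competing libms are practical.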

1 Like

There are 2 reasons I often use Fortran instead of Julia:

  1. I have to try to make Julia code fast. I don’t have to try to make Fortran code fast.
  2. I like being able to compile everything at once, instead of JIT compiling over and over and over.
1 Like

Linking against a high quality libm would have been a totally reasonable solution, but Julia’s approach has 2 advantages.

  1. Separate compilation prohibits some forms of optimization.
    If you write a^b for Float64 a and b, and a is constant, Julia can (at least theoretically, I forget whether it currently does) see that this is implemented as exp(log(a)*b) and precompute log(a) (technically it would be precomputing an extended-precision version, but the point still stands). Similarly, the compiler can sometimes figure out ways of automatically vectorizing code that includes special functions, which can give a free 2–4x speed boost.
  2. Julia code isn’t a black box to users. Julia has macros such as @edit and @code_native that let users look at implementations. Using pure-Julia implementations where feasible increases the number of users who look at the code, and therefore lowers barriers for new users who want to learn about numerical methods. If you link statically compiled binaries, you create black boxes that cause knowledge to silo.
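
The pow rewrite in point 1 can be checked numerically in a few lines. This is a Python sketch of the transformation itself, not of any compiler’s actual output; the constant base 3.0 is an arbitrary example.

```python
import math

a = 3.0                      # compile-time-constant base in the a^b example
log_a = math.log(a)          # the part a compiler could precompute once

def pow_const_base(b):
    # a**b rewritten as exp(log(a) * b), with log(a) hoisted out of the call
    return math.exp(log_a * b)

for b in (0.5, 2.0, 7.25):
    assert abs(pow_const_base(b) - a ** b) <= 1e-9 * a ** b
print("matches a**b to better than 1e-9 relative error")
```

With `log(a)` hoisted, each call costs one multiply and one `exp` instead of a full `pow`, which is where the speedup comes from (a real implementation keeps `log(a)` in extended precision to avoid losing accuracy in the rewrite).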

I don’t in any way mean to disparage the incredible work that has been put into glibc’s and FreeBSD’s libm. Both are exceptional pieces of software. However, Windows users are a plurality (if not a majority) of Julia users, and to the extent possible, we like to give them nice things as well.

Invited FortranCon talk by Leandro Martinez (@lmiq) may be interesting to readers here:

4 Likes

I don’t have the technical knowledge of Oscar and of the others here. But from a user perspective I have the feeling that

Julia main advantages are:

  1. the package manager and the API for using numerical algorithms (linear algebra and optimization are the areas I know something about). That is much easier in Julia, even when the underlying code is a Fortran library.

  2. tooling for code analysis and benchmarking

  3. the possibility of realistically benchmarking tiny pieces of code in the REPL (this is quite unique to Julia, as it is a combination of JIT compilation and the tooling). For me this was a game changer, because it allows improving the code in a modular way, more safely, both in terms of performance and code structure.

  4. code distribution: it is easier to tell someone to install Julia and run a bunch of commands in the Julia REPL than to install a compiler, etc. Particularly for non-CS Windows users.

  5. easy to show all numerics and analysis (plots, etc.) in the same language.
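
Point 3 has rough analogues in other ecosystems’ stdlib tools. As a sketch (in Python, purely for comparison; Julia’s BenchmarkTools additionally handles JIT warmup and statistical sampling, which is what makes the REPL workflow reliable there):

```python
import timeit

# Interactively compare two small implementations, the way one might
# iterate on a tiny piece of code from a REPL.
setup = "xs = list(range(1000))"
t_gen = timeit.timeit("sum(x * x for x in xs)", setup=setup, number=1000)
t_map = timeit.timeit("sum(map(lambda x: x * x, xs))", setup=setup, number=1000)
print(f"genexpr: {t_gen:.4f}s  map: {t_map:.4f}s")
```

The same measure-tweak-remeasure loop, done per-function at the REPL, is what makes modular performance work pleasant.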

Advantages of Fortran:

  1. code distribution: if you want to distribute just a small binary, there is no comparison here. Also very important is the possibility of turning the code into a fast Python library. I hope that will change, because I like Julia much more than Python, but the user bases are not really comparable now. And while possible, providing Julia code to be used from Python is sort of a workaround, and it carries over Julia’s compilation overhead.

  2. stability: having a package that is turning 20, and knowing that it will always compile and run with the same instructions, is of course very nice. Julia has tools for reproducibility that are very nice, but I can’t expect a new user of my package to deal with them.

  3. better compiler error messages, of course

2 Likes

For those interested in that idea, here are a few links for further study:

Julia’s OpenLibm: https://openlibm.org/
GitHub clone of crlibm: GitHub - taschini/crlibm: A mirror of the CRLibm project from INRIA Forge
Metalibm GitHub - metalibm/metalibm: Code generation tool to generate mathematical libraries
It is an apparent successor, if the Rust page can be believed.
Last link to metalibm site from archive.org (August 2021):
https://web.archive.org/web/20210827094355/http://www.metalibm.org/

Old GCC discussion of including crlibm as part of the codebase: Uros Bizjak - Using crlibm as the default math library in GCC sources

1 Like