Fortran port of the Bessels.jl library

I’ve started a Fortran port of the Bessels.jl library, which I’m happy to share here:

perazz/fortran-bessels: Fortran port of the Bessels.jl repository

First of all, let me send a shout-out to the author, @heltonmc: that’s a hell of a library, congrats on the polynomial work! (I also did some of that while working on combustion chemical-kinetics functions; polynomials are great. You can check that out here if you’re interested.)

So far, the implementation is limited to the bessel_j0 function; if anyone is interested in contributing, please let me know!

Second, I’ve set the license to MIT, please do let me know if there are any concerns.


Third, a first comment: accuracy is really spot on. The error is nearly always below 1e-6 (relative) and 1e-15 (absolute), except near abs(x) ~ 9 and abs(x) ~ 21.
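For what it’s worth, the loss of relative accuracy near abs(x) ~ 9 and abs(x) ~ 21 is expected: J0 has zeros near x = 8.654 and x = 21.21, and relative error naturally degrades close to a zero of the function. A minimal sketch of an accuracy scan (the package import and function name are hypothetical placeholders, not the actual API):

```fortran
program check_j0
   use iso_fortran_env, only: dp => real64
   ! use fortran_bessels, only: package_j0 => bessel_j0   ! hypothetical import
   implicit none
   real(dp) :: x, ref, val, relerr, abserr
   integer  :: i

   do i = 0, 2500
      x   = 0.01_dp*real(i, dp)          ! scan 0 <= x <= 25
      ref = bessel_j0(x)                 ! intrinsic as reference
      val = ref                          ! replace with package_j0(x)
      abserr = abs(val - ref)
      relerr = abserr/max(abs(ref), tiny(1.0_dp))
      ! flag points where both tolerances from the post are exceeded
      if (relerr > 1.0e-6_dp .and. abserr > 1.0e-15_dp) print *, x, relerr, abserr
   end do
end program check_j0
```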

CPU time comparison on my local machine is reported for the test routine:

INTRINSIC time used:  120.3125 ns/eval, sum(z)=93.821861287174954
PACKAGE   time used:  275.0000 ns/eval, sum(z)=93.821864980743882
[bessels] 3 test completed: 2 passed, 1 failed.
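For reference, a minimal sketch of how such a per-evaluation timing could be measured with system_clock (the evaluation count and sample points are my own assumptions, not the actual test routine):

```fortran
program bench_j0
   use iso_fortran_env, only: dp => real64, int64
   implicit none
   integer, parameter :: n = 1000000      ! number of evaluations (assumed)
   integer(int64) :: t0, t1, rate
   real(dp) :: x, z
   integer  :: i

   z = 0.0_dp
   call system_clock(t0, rate)
   do i = 1, n
      x = 0.1_dp + 20.0_dp*real(i, dp)/real(n, dp)   ! sample points in (0, 20]
      z = z + bessel_j0(x)     ! accumulate so the loop cannot be optimized away
   end do
   call system_clock(t1)
   print '(a,f10.4,a,es22.15)', 'INTRINSIC time used: ', &
         1.0e9_dp*real(t1 - t0, dp)/(real(rate, dp)*real(n, dp)), &
         ' ns/eval, sum(z)=', z
end program bench_j0
```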

I guess I’m a little puzzled by this one, and I don’t really want to enter any CPU-time comparison fights… but it seems that a full port of this library will only be interesting if the other functions perform better than this one.

I have made this comment in other threads, but perhaps it should be stated here too. There are two different goals when implementing the Fortran intrinsic functions.

One is to give results that are fast with some minimal expectation of accuracy. If the user needs a more accurate result, then he is expected to write and use his own replacement function.

The other is to give results that are as accurate as possible, down to the least significant bit, and as robust as possible (with no unexpected NaNs, overflows, underflows, INFs, memory alignment problems, or memory allocation problems). Then if the user needs a faster result where some of those features are traded away, he is expected to write and use his own replacement function.

In this case, it appears that someone has found a faster way to compute a less accurate result than whatever library function the Fortran compiler is using. The ability to do that should not surprise anyone. If instead someone had found a faster way to compute a more accurate and more robust result, then it would be more interesting.

I agree with you: if one has no need for full 64-bit floating-point accuracy, it’s useful to have faster approximate libraries. That’s what motivated me to test the performance of Bessels.jl in Fortran.

Maybe you missed it, but this point was already challenged and expanded upon in the previous thread. The library function that Fortran is using does indeed have on average better accuracy bounds, but it also has a significantly longer tail of low accuracy results than the algorithm that Bessels.jl is using.

This trade-off is much more subtle, and it’s hard to say what’s ‘better’ in general without discussing a use case.



“The Bessel implementation of the specific Fortran compiler I have tested, …” is a more accurate description. There are more than half a dozen Fortran compilers.


Interesting, thank you.
Aren’t the two following notations equivalent ways to initialize an array of double precision parameters?

[real(BK) :: 0.0]



Btw, I was just copying and pasting the exact same numbers from the Bessels.jl source; that’s why they have so many digits.

Ha. That’s a “no” then, pretty clearly stated tbh.

My bad, I never use that, but it seemed like the best option to avoid pasting _BK after every number parameter (there were many…)

Learn to love the _rk’s. Leaving those out and inadvertently getting single precision numbers in a calculation is one of the top mistakes beginners make! It’s basically a rite of passage (along with inadvertent implicitly typed variables and inadvertent implicitly saved variables).
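A quick illustration of the pitfall (a minimal sketch; the constant is the Euler–Mascheroni number, picked only as an example of a many-digit coefficient):

```fortran
program kind_pitfall
   use iso_fortran_env, only: rk => real64
   implicit none
   ! With the suffix, the literal carries all ~16 significant digits:
   real(rk), parameter :: c_good = 0.57721566490153286_rk
   ! Without the suffix, the literal is a DEFAULT (single) real: digits beyond
   ! ~7 significant figures are lost BEFORE the conversion to real64.
   real(rk), parameter :: c_bad  = 0.57721566490153286

   print *, c_good
   print *, c_bad
   print *, 'difference: ', abs(c_good - c_bad)   ! ~1e-8, not ~1e-16
end program kind_pitfall
```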


I think it’s fine to keep all the digits. Especially if the algorithm can also work for real precisions greater than double (e.g. real128). Even better would be to compute them at compile time with the actual equation (for cases where that is possible… not always the case).

I agree. The Julia source code mentioned some references, but going back to them was not worth the effort; it looks like the author computed them in quadruple precision. Tbh, I think it is more accurate to leave the very long digit strings anyway (e.g., that’s what folks do in Runge–Kutta and other integrators), because if you do the whole calculation in lower precision and it’s more than a handful of adds/multiplies, the resulting coefficient may not be so accurate…

I wish we were all like you and never made mistakes.


You’re both right. I guess I would have expected the F2018 [real(RK) :: ] construct to come more in support of the users, i.e., putting a type that then applies to all elements of the array… instead we’ve got an assignment-style conversion, lol!

Well, most of the achievements in computing nowadays seem to be made by going back to single-precision (or even half-precision) arithmetic. But for combustion, that’s not an option.


I think using single precision, or even half precision when it becomes available in compilers, will be useful even in combustion. But those low-precision values will need to be used at just the right places and times in order to speed up the calculations without losing final accuracy.


Strictly considering the standard, in the general case of real literal constants the answer is no, as others have pointed out.

In the case of the specific constant of zero you show, you will be hard-pressed to find a processor whose approximation of 0.0 in your kind BK is anything other than 0.0_bk.

The standard leaves as processor-dependent the approximation that results from an intrinsic assignment such as x = 0.0, where x is of real type with kind BK (say, not the default real), which is then equivalent to x = real( 0.0, kind=BK )
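A small demonstration of why zero is a safe special case while a constant like 0.1 is not (a sketch; BK is simply real64 here):

```fortran
program constructor_kinds
   use iso_fortran_env, only: bk => real64
   implicit none
   real(bk) :: a(1), b(1)

   ! 0.0 is exactly representable in every IEEE kind, so the conversion
   ! real(0.0, bk) implied by the typed constructor is exact:
   a = [real(bk) :: 0.0]
   print *, a(1) == 0.0_bk            ! T on any IEEE processor

   ! 0.1 is NOT exactly representable: the default-real rounding happens
   ! first, so the typed constructor differs from the suffixed literal:
   b = [real(bk) :: 0.1]
   print *, b(1) == 0.1_bk            ! F
end program constructor_kinds
```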


Readers may want to take a look at this suggestion at the J3 Fortran proposals site to alleviate the current problem with default real which, colloquially speaking, is single precision:

Take a look also at the other thread at this site on NAMESPACEs.

Notice a comment I just added there with a thought I have long had, which is to employ @certik’s NAMESPACE concept further to give users more control to specify “package”-wide (aka namespace-wide) defaults for various things:

With respect to the original post here toward the proposal for a “proper” NAMESPACE facility in Fortran, and given also discussions on this forum (#78) and requests elsewhere regarding the default kind of constants (and perhaps intrinsics), the thought is also for the NAMESPACE to be able to define such defaults and to have all the enclosed modules “inherit” them:

namespace n
    default real ( kind=selected_real_kind( p=12 ) )
    module m
end namespace
namespace(n) module m
    ! the default kind of real literal constants shall be the one defined
    ! in the enclosing namespace, which has a precision of 12
end module

I would add that a very straightforward way to fix this would be to slightly extend the implicit keyword… with something like

implicit double precision(a-c,e-z,constants) 

i.e., so that it applies to all real constants (or whatever other keyword is chosen)

  • we all gotta type that anyways all the time
  • fully backward compatible
  • no need to add even more syntax (namespaces, selected kinds, etc)
  • all compilers mangle that already

Since we can’t get rid of implicit none, why not double down on that and make it better!

Exactly! Whatever the syntax, what we all want is the ability to control the default (implicit, Fortranly speaking) kind of all otherwise unspecified constants in the program unit.

I don’t get the point here. Do you mean that the standard deviation of the errors of the Fortran library is lower, but the distribution of the errors has longer tails? I’m asking because, from the histograms, I would say the standard deviation of the Julia library is actually the lowest one… But the total number of counts doesn’t seem to be the same between the two histograms, so there’s something unclear to me here (I’m not even sure what “eps” is exactly here)…

EDIT: OK, I missed that it was a log vertical scale. It would be good to have the linear scale as well, plus the calculated standard deviations of both histograms. I also notice that the Julia distribution is biased toward positive errors (but again, to what extent is difficult to evaluate on a log scale). The extracted mean values would be useful (and the standard deviations calculated both for an assumed mean of zero and for the calculated mean).
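For what it’s worth, extracting those statistics from an array of errors is a one-liner each; a minimal sketch with placeholder data (the error values below are made up, not from either histogram):

```fortran
program err_stats
   use iso_fortran_env, only: dp => real64
   implicit none
   real(dp) :: err(5), mu, sd_mean, sd_zero
   integer  :: n

   err = [ 1.0e-7_dp, -2.0e-7_dp, 5.0e-8_dp, 3.0e-7_dp, -1.0e-8_dp ]  ! placeholder errors
   n   = size(err)

   mu      = sum(err)/real(n, dp)                       ! sample mean (bias)
   sd_zero = sqrt(sum(err**2)/real(n, dp))              ! rms, assumed mean = 0
   sd_mean = sqrt(sum((err - mu)**2)/real(n - 1, dp))   ! sample std about the mean

   print *, 'mean =', mu, ' std(0) =', sd_zero, ' std(mean) =', sd_mean
end program err_stats
```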
