Fortran port of the Bessels.jl library

I’ve started a Fortran port of the Bessels.jl library, which I’m happy to share here:

perazz/fortran-bessels: Fortran port of the Bessels.jl repository (github.com)

First of all, let me send a shout-out to the author, @heltonmc: that’s a hell of a library, congrats on the polynomial work! (I also did some of that while working on combustion chemical kinetics functions; polynomials are great. You can check that out here if you’re interested.)

So far, the implementation is limited to the bessel_j0 function; if anyone is interested in contributing, please let me know!

Second, I’ve set the license to MIT; please do let me know if there are any concerns.


Third, a first comment: accuracy is really spot on. It’s nearly always below 1e-6 (relative) and 1e-15 (absolute), except in the vicinity of abs(x) ~ 21 and abs(x) ~ 9.

CPU time comparison on my local machine is reported for the test routine:

INTRINSIC time used:  120.3125 ns/eval, sum(z)=93.821861287174954
PACKAGE   time used:  275.0000 ns/eval, sum(z)=93.821864980743882
[bessels] 3 test completed: 2 passed, 1 failed.
STOP -1

I guess I’m a little puzzled by this one. I don’t really want to get into any CPU-time comparison fights… but it seems a full port of this library will only be worthwhile if the other functions perform better than this one.
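For context, the per-evaluation numbers above come from a loop of roughly this shape. The sketch below only times the Fortran 2008 intrinsic bessel_j0; the array size and the argument range are illustrative, not copied from the actual test routine:

program bench_j0
   use iso_fortran_env, only: dp => real64, int64
   implicit none
   integer, parameter :: n = 1000000
   integer(int64) :: t0, t1, rate
   real(dp) :: x(n), z(n), ns_per_eval
   integer :: i

   call random_number(x)
   x = 50.0_dp*x                          ! illustrative argument range [0, 50)

   call system_clock(t0, rate)
   do i = 1, n
      z(i) = bessel_j0(x(i))              ! F2008 intrinsic
   end do
   call system_clock(t1)

   ns_per_eval = 1.0e9_dp*real(t1 - t0, dp)/(real(rate, dp)*n)
   print '(a,f10.4,a,es22.15)', 'INTRINSIC time used: ', ns_per_eval, ' ns/eval, sum(z)=', sum(z)
end program bench_j0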

I have made this comment in other threads, but perhaps it should be stated here too. There are two different goals when implementing the Fortran intrinsic functions.

One is to give results that are fast with some minimal expectation of accuracy. If the user needs a more accurate result, then he is expected to write and use his own replacement function.

The other is to give results that are as accurate as possible, down to the least significant bit, and as robust as possible (with no unexpected NaNs, overflows, underflows, INFs, memory alignment problems, or memory allocation problems). Then if the user needs a faster result where some of those features are traded away, he is expected to write and use his own replacement function.

In this case, it appears that someone has found a faster way to compute a less accurate result than whatever library function the Fortran compiler is using. The ability to do that should not surprise anyone. If instead someone had found a faster way to compute a more accurate and more robust result, then it would be more interesting.

I agree with you: if one has no need for full 64-bit floating-point accuracy, it’s useful to have faster approximate libraries. That’s what motivated me to test the performance of Bessels.jl in Fortran.

Took a quick peek. SQPIO2 and nearby entities have ridiculously long decimal representations. You need at most 21 digits for double precision. All of the numerical constants need the _BK kind suffix, i.e.,

real(BK), parameter :: J0_ROOTS(2,16) = reshape([real(BK) :: &
              2.4048255576957730, -1.176691651530894e-16, &
              3.8317059702075125, -1.5269184090088067e-16, &
              5.5200781102863110, 8.088597146146722e-17, &

These are default real kind quantities, which may differ from real(BK).
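To make that concrete, here is a minimal sketch (BK standing in for a real64 kind parameter, and the value being the first root from the table above). Without the suffix, the literal is rounded through default (single) real before it ever reaches real(BK):

program kind_suffix_demo
   use iso_fortran_env, only: real64
   implicit none
   integer, parameter :: BK = real64
   real(BK), parameter :: root_default = 2.4048255576957730     ! default real literal
   real(BK), parameter :: root_bk      = 2.4048255576957730_BK  ! real(BK) literal
   print *, root_default      ! accurate to only ~7-8 significant digits
   print *, root_bk           ! accurate to full double precision
end program kind_suffix_demo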

Maybe you missed it, but this point was already challenged and expanded upon in the previous thread. The library function that Fortran is using does indeed have on average better accuracy bounds, but it also has a significantly longer tail of low accuracy results than the algorithm that Bessels.jl is using.

This trade off is much more subtle, and it’s hard to say what’s ‘better’ in general without discussing a use-case.



“The Bessel implementation of the specific Fortran compiler I have tested, …” is a more accurate description. There are more than half a dozen Fortran compilers.


Interesting, thank you.
Aren’t the following two notations equivalent ways to initialize an array of double precision parameters?

[real(BK) :: 0.0]

and

[0.0_BK]

Btw, I was just copying and pasting the exact same numbers from the Bessels.jl source; that’s why they have so many digits.

AFAIK, no. From F2018, p. 89,

If type-spec appears, it specifies the declared type and type parameters
of the array constructor.  Each ac-value expression in the array-constructor
shall be compatible with intrinsic assignment to a variable of this type and
type parameters. Each value is converted to the type and type parameters
of the array-constructor in accordance with the rules of intrinsic assignment
(10.2.1.3).

The way I read this is that in [real :: 1, 3, 5] the 1, 3, and 5 are of type integer
and these are then converted to real.

In any case, I always err on the safe side and add the kind suffix to real
constants.
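A small demo of what that means for real values (dp here is just real64): the type-spec converts an already-rounded default-real literal, so the two constructors are not equivalent in general. For the 0.0 in the example above they happen to agree, since 0.0 is exact in both kinds; 0.1 makes the difference visible:

program array_ctor_demo
   use iso_fortran_env, only: dp => real64
   implicit none
   ! 0.1 is a default-real literal: it is rounded to single precision first,
   ! then converted to real(dp) by the type-spec of the constructor.
   real(dp), parameter :: a(1) = [real(dp) :: 0.1]
   ! 0.1_dp is rounded directly to double precision.
   real(dp), parameter :: b(1) = [0.1_dp]
   print *, a(1)      ! ~0.10000000149011612 (single-precision 0.1, widened)
   print *, b(1)      ! ~0.10000000000000001 (double-precision 0.1)
end program array_ctor_demo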

Ha. That’s a “no” then, pretty clearly stated tbh.

My bad; I never use that, but it seemed like the best option to avoid pasting _BK after every numeric parameter (there were many…).

Learn to love the _rk’s. Leaving those out and inadvertently getting single precision numbers in a calculation is one of the top mistakes beginners make! It’s basically a rite of passage (along with inadvertent implicitly typed variables and inadvertent implicitly saved variables).


I think it’s fine to keep all the digits, especially if the algorithm can also work for real precisions greater than double (e.g., real128). Even better would be to compute them at compile time from the actual equation (for cases where that is possible… not always the case).
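For the cases where it is possible, that could look something like the sketch below (names are illustrative, and SQPIO2 is assumed here to mean sqrt(pi/2), which may not match the repository). With a Fortran 2003+ compiler, elemental intrinsics are allowed in constant expressions, so the value comes out fully accurate for whichever kind BK is:

integer,  parameter :: BK     = selected_real_kind(p=33)   ! real128 where supported (returns a negative value otherwise)
real(BK), parameter :: PI     = acos(-1.0_BK)
real(BK), parameter :: SQPIO2 = sqrt(PI/2.0_BK)            ! evaluated at compile time for kind BK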

I agree. The Julia source code mentioned some references, but going back to them was not worth the effort; it looks like the author computed them in quadruple precision. Tbh, I think it is more accurate to leave the very long strings anyway (e.g., that’s what folks do for Runge-Kutta and other integrator coefficients), because if you do the whole calculation in lower precision and it’s more than a handful of adds/multiplies, the resulting coefficient may not be so accurate…

It is well above quadruple precision. You only need 36 digits (some might claim 37) to get the 113 bits of quad precision. There are 79 digits in those strings, which is likely good for a 256-bit floating point number.
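(The rule of thumb behind those counts: a p-bit binary significand round-trips in ceil(p*log10(2)) + 1 decimal digits, so 113 bits gives ceil(34.02) + 1 = 36 for quad, and the 237-bit significand of IEEE binary256 gives ceil(71.34) + 1 = 73, which 79 digits covers comfortably.)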

It’s not a rite of passage. It is an indication that someone forgot to learn about the language they are programming in.

I wish we were all like you and never made mistakes.


It is tough being me. But let’s be real: in C, you have float, double, and long double, and literal constants are 1.3F, 1.3, and 1.3L. In Fortran, one writes 1.3 or 1.3_sp, 1.3_dp, and 1.3_xp with appropriate definitions of sp, dp, and xp. It does not matter what language one uses; what is important is to actually learn how to use the language. Not an earth-shattering concept.


You’re both right. I guess I would have expected the F2018 [real(RK) :: ] syntax to come more in support of users, i.e., specify a type that then applies to all elements of the array… instead we get assignment-conversion semantics, lol!

I’m not privy to the minds of J3, but I suspect the choice was to not introduce ambiguity about whether 1.3 is single precision, double precision, or some other precision. For consistency’s sake (and apparently to the irritation of a new generation), 1.3 is single precision in Fortran.


Well, most of the achievements in computing nowadays seem to come from going back to single precision (or even half precision) arithmetic. But for combustion, that’s not an option.
