Polynomial fitting in old code (special functions in particular)

I have been examining several source files from Netlib regarding special functions - in particular the K0 and I0 Bessel functions. What intrigues me is the use of Chebyshev polynomials to approximate these functions and more precisely how the weights were derived. I have seen no documentation on how these weights were actually derived (apart from some references to the theoretical background). Does anyone know anything about it? I can imagine the general procedure, but not the details.
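The general procedure I can imagine goes something like this — a minimal sketch, with `exp` standing in for I0 and no claim that this is how the Netlib weights were actually produced: sample the function at Chebyshev-Gauss nodes and compute the coefficients by the discrete orthogonality relation.

```python
import math

def chebyshev_coeffs(f, n):
    """Coefficients c_k of f(x) ~ c_0/2 + sum_{k>=1} c_k T_k(x) on [-1,1],
    via Chebyshev-Gauss quadrature at the n Chebyshev nodes."""
    nodes = [math.cos(math.pi * (j + 0.5) / n) for j in range(n)]
    return [2.0 / n * sum(f(x) * math.cos(k * math.acos(x)) for x in nodes)
            for k in range(n)]

def cheb_eval(c, x):
    """Direct evaluation of the series, using T_k(x) = cos(k*acos(x))."""
    return c[0] / 2 + sum(ck * math.cos(k * math.acos(x))
                          for k, ck in enumerate(c) if k > 0)

c = chebyshev_coeffs(math.exp, 16)
# the coefficients decay rapidly, which is what makes truncation near-minimax
```

For a well-behaved function the coefficients fall off geometrically, so truncating the series gives an approximation close to the true minimax polynomial; the published weights presumably came from a (much more careful, higher-precision) version of this idea plus range reduction.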


I used a version directly derived from routines developed at Sandia, which used a combination of techniques (including Chebyshev polynomials) depending on the argument range, and I do not remember ever having a problem with them. At least one document popped up.

Not sure; maybe these are in Netlib somewhere. The ones I used had some references in the
documentation. Depending on what you have access to and what has been copied to the web,
the papers, or searches on the authors' names, might turn something up:

     Applied Mathematics Division 2642
     Sandia Laboratories
     P. O. Box 5800
     Albuquerque, New Mexico  87115
     Control Data 6600 Version 5.1, 10 December 1973

         Written by    Ronald D. Halbgewachs, October 1,1971.

This routine calculates the Bessel functions J(X),Y(X), I(X), or K(X)
for double precision arguments and integer orders. Backwards recurrence
techniques are used for the J(X) and I(X) functions except for very
small arguments, where double precision series evaluation is used.
Forward recurrence is used for the Y(X) and K(X) functions with double
precision Chebyshev approximations used for function initialization.
Accuracy is between thirteen and fourteen significant figures.

Computes zero order and first order Bessel functions using series
approximations and then computes Nth order function using recurrence

Recurrence relation and polynomial approximation technique as described by
A. J. M. Hitchcock, "Polynomial Approximations to Bessel Functions of Order
Zero and One and to Related Functions", M.T.A.C., V. 11, 1957, pp. 86-88,
and G. N. Watson, "A Treatise on the Theory of Bessel Functions", Cambridge
University Press, 1958, p. 62.

Recurrence relation technique described by H. Goldstein and
R. M. Thaler, "Recurrence Techniques for the Calculation of
Bessel Functions", M.T.A.C., V. 13, pp. 102-108, and I. A. Stegun and
M. Abramowitz, "Generation of Bessel Functions on High Speed
Computers", M.T.A.C.

Series evaluation is used for small arguments, recurrence techniques are
used for midrange, and Hankel's asymptotic expansion is used for large
arguments. Accuracy is between thirteen and fourteen correct significant
figures except for large arguments, where the accuracy eventually falls
to eight figures.
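The backward-recurrence idea those comments mention for J(X) and I(X) is easy to sketch. This is not the Sandia code — the starting order and seed below are naive placeholders, and production code chooses them much more carefully — but it shows the core of Miller's algorithm: recur downward from an arbitrary seed, then normalize with a known identity.

```python
def bessel_j_downward(nmax, x, extra=15):
    """Miller's backward recurrence for J_0(x)..J_nmax(x), x > 0 (a sketch).
    Recur downward with J_{k-1} = (2k/x) J_k - J_{k+1} from an arbitrary
    seed, then normalize using J_0 + 2*(J_2 + J_4 + ...) = 1."""
    m = nmax + extra + int(x)      # crude choice of starting order
    jp1, j = 0.0, 1e-30            # arbitrary seed; scale drops out below
    out = [0.0] * (m + 1)
    out[m] = j
    for k in range(m, 0, -1):
        jm1 = (2.0 * k / x) * j - jp1
        jp1, j = j, jm1
        out[k - 1] = jm1
    norm = out[0] + 2.0 * sum(out[k] for k in range(2, m + 1, 2))
    return [v / norm for v in out[: nmax + 1]]
```

Backward recurrence damps out the unwanted Y_n component instead of amplifying it, which is why this works even though the seed is meaningless.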

We tweaked them over time, but I have been told by a previous employer that they are still in use, as they eliminated the variability they were seeing across different compilers' intrinsics when they tried to switch.

Might not be much, but I am not seeing any other pointers coming in.

Thanks - it is mainly curiosity (of course), but given the high precision of these weights I imagine a lot of work went into them. And as you say, variations between compilers do occur. I thought it might be a worthwhile idea to preserve the techniques that went into them in some way. Not sure what form that would take, but the discussion about salvaging the knowledge incorporated in ancient code made me think.

The link to the old paper shows just that; the extensive testing, documentation, and methodology that went into those routines, which appears to be on the edge of being lost, makes you pause. Even just the history of routines like the trig functions, which I think everyone takes for granted as intrinsics, could be a small course in how methods changed as memory became cheaper and faster, word sizes increased, and so on. Even without any interest in Bessel functions, the PDF looks to be a good read as a study of what can go into developing code that accurately reflects a function, although I was mostly skimming. Hopefully anyone creating an fpm(1) package will keep that in mind and bundle in resources about methodology and verification tests.

All sorts of Bessel and other special functions are included in John Burkardt's special_functions.f90, which is a modernized version of original F77 code by Zhang and Jin, apparently used by SciPy as specfun.f, but I am not sure what algorithm is used to compute K0 and I0.
I gave the old F77 version a try (before I found the F90 one) and compared its BJNDD function to the transformational version of the bessel_jn intrinsic. Both can compute all Jn(x) values for n = 0..max. BJNDD also returns the first and second derivatives as a bonus, and it is almost 3.5 times faster than bessel_jn (the test was done on my MacBook, ifort compiler, with max=20 and 1e6 different values of x). My first guess is that it is the allocation of the returned array in bessel_jn that makes the difference.

The Boost C++ library has a nice description of the Remez method. It has been used to develop some of the approximations used in the Boost Special Functions chapter.
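For a flavor of it, a toy Remez exchange fits in a few dozen lines. This is a simplified sketch (pure Python, low degree, with a crude sampling-based exchange step), not the production algorithm Boost describes; real implementations use higher working precision and proper root/extremum finding.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def remez(f, a, b, deg, iters=8):
    """Near-minimax polynomial for f on [a, b] by Remez exchange (a sketch)."""
    m = deg + 2
    # initial reference: Chebyshev extrema mapped onto [a, b]
    ref = sorted(0.5 * (a + b) - 0.5 * (b - a) * math.cos(math.pi * j / (m - 1))
                 for j in range(m))
    for _ in range(iters):
        # solve  sum_i c_i x_j**i + (-1)**j * E = f(x_j)  for c_0..c_deg and E
        A = [[x ** i for i in range(deg + 1)] + [(-1.0) ** j]
             for j, x in enumerate(ref)]
        coef = solve(A, [f(x) for x in ref])[: deg + 1]
        err = lambda t: f(t) - sum(c * t ** i for i, c in enumerate(coef))
        # exchange step: move each reference point to the error extremum
        # inside its own bracket (midpoints between neighbours)
        new_ref = []
        for j, x in enumerate(ref):
            lo = a if j == 0 else 0.5 * (ref[j - 1] + x)
            hi = b if j == m - 1 else 0.5 * (x + ref[j + 1])
            pts = [lo + (hi - lo) * k / 400.0 for k in range(401)]
            new_ref.append(max(pts, key=lambda t: abs(err(t))))
        ref = new_ref
    return coef
```

For example, `remez(math.exp, 0.0, 1.0, 2)` returns a degree-2 polynomial whose maximum error on [0, 1] is below one percent, with the characteristic equioscillating error curve.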

A recent library perhaps of interest is RLIBM. The publications give a quite comprehensive explanation of the methodology used to derive correctly-rounded mini-max polynomial approximations. It doesn’t cover Bessel functions however.


In general, throwing extra bits at the problem is often the best solution.

Remez is a new name for me - thanks. I am pretty sure I have come across the algorithm by Clenshaw before.
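For reference, Clenshaw's recurrence for evaluating a Chebyshev series — the usual companion to these coefficient tables — is tiny. A sketch, assuming the common convention that the first coefficient is halved:

```python
def clenshaw(c, x):
    """Evaluate c[0]/2 + sum_{k>=1} c[k]*T_k(x) by Clenshaw's recurrence,
    avoiding explicit computation of the T_k."""
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):                 # k = n down to 1
        b1, b2 = 2.0 * x * b1 - b2 + ck, b1    # b_k = c_k + 2x b_{k+1} - b_{k+2}
    return x * b1 - b2 + c[0] / 2
```

It needs one multiply-add pair per coefficient and is numerically well behaved for |x| <= 1, which is why it shows up in so much of this old code.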

Well, the nearby university library owns a copy, so I have ordered it just now :wink:

Is this link and software helpful?

That does look nice. Thanks.

Remez generally works pretty well even when you evaluate the polynomial at the same precision as the function. Most functions have rapidly converging Taylor series, so using Horner's method, the polynomial will typically converge to about 0.5 ULP if you use fused multiply-add instructions. That said, calculating the coefficients within the Remez algorithm works best if you are using significantly more precision than you are targeting.

You won't get below 0.5 ULP without extended precision, but under 1 ULP is typical, and often it can be as low as around 0.55 ULP.
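To make the Horner point concrete, here is a Python sketch; the update step is a single FMA on hardware that has it (`math.fma` in Python 3.13+ exposes it, on earlier versions the plain expression compiles to separate multiply and add):

```python
import math

def horner(coeffs, x):
    """Evaluate a polynomial with Horner's method; coeffs[0] is the
    constant term. Each step is one multiply-add."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c      # math.fma(acc, x, c) on Python >= 3.13
    return acc

# degree-10 Taylor polynomial of exp, adequate near 0 for illustration
taylor = [1.0 / math.factorial(k) for k in range(11)]
```

With rapidly decaying coefficients the later (smaller) terms are accumulated first, so the rounding error stays near the ULP level the post describes.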

If you can't find Ralston & Rabinowitz, look for "Computer Approximations" by Hart, Cheney, Lawson, Mesztenyi, Thacher and Witzgall. "Special Functions" by Nico Temme has a very good description of why second-order recurrences, for example those for Bessel functions, are unstable in one direction and stable in the other. Remember that J_n and Y_n satisfy the same second-order recurrence. Recur forward and you get Y_n, backward and you get J_n.
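That one-direction instability is easy to demonstrate numerically. A sketch — the seeds below are correctly rounded double-precision values of J0(1) and J1(1), and everything that follows is the Y_n contamination the forward recurrence amplifies:

```python
def forward_j(nmax, x, j0, j1):
    """Forward recurrence J_{k+1} = (2k/x) J_k - J_{k-1}, seeded with
    accurate J_0(x), J_1(x). The tiny seed and per-step rounding errors
    excite the growing Y_n solution, so the computed 'J_n' soon explodes."""
    out = [j0, j1]
    for k in range(1, nmax):
        out.append((2.0 * k / x) * out[k] - out[k - 1])
    return out

# correctly rounded doubles for J0(1) and J1(1)
fwd = forward_j(20, 1.0, 0.7651976865579666, 0.44005058574493355)
# true J_20(1) is about 3.9e-25, but fwd[20] is enormously wrong:
# the error component grows like Y_20(1) ~ -4e22 times the rounding level
```

Run backward instead (Miller's algorithm) and the same recurrence damps the Y_n component, which is exactly Temme's point about which direction is stable.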