What are some applications of sin(x) for x > 1e16?

Have you ever needed trigonometric functions, say sin(x), for arguments x larger in absolute value than, say, 1e16? If so, what was your application?

I am asking from the perspective of their implementations in a compiler, as well as in the stdlib for non-intrinsic functions. This is a common issue; see, e.g.:

The hard part is argument reduction, which for values x > 1e16 requires the value of pi (typically pi/4, but 2*pi can also be used) to more than quadruple precision. I am happy to explain why and how that works, but it’s a separate issue; here I am interested in the user experience. For |x| < 1e16, quite efficient reduction techniques exist.
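
To make the failure mode concrete, here is a small Julia sketch (my own toy comparison, using Base’s rem2pi and a BigFloat evaluation as stand-ins for an accurate reduction and a reference value):

    # Naive reduction with a 53-bit value of 2*pi vs. an accurate reduction.
    x = 1.0e22                                # exactly representable in Float64

    naive  = x - round(x / (2pi)) * (2pi)     # cancels catastrophically: only
                                              # 53 bits of 2*pi are available
    proper = rem2pi(x, RoundNearest)          # Base's accurate reduction

    reference = Float64(sin(big(x)))          # BigFloat sine as ground truth
    @show sin(naive) sin(proper) reference    # sin(naive) typically shares no
                                              # digits with the reference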

From a compiler or stdlib's perspective, we have 4 options for x > 1e16:

  1. Give an accurate answer
  2. Return an increasingly inaccurate result (e.g., half or more of the digits wrong, i.e., worse than 1e-8 relative accuracy for double precision)
  3. Give an error message
  4. Return NaN (which, with the appropriate compiler option, can be made to trap if the user wants, effectively becoming option 3)

From a performance perspective, options 1, 3, and 4 all seem to require some kind of branching (an if statement on x > threshold). If a branch is needed anyway, we might as well do 1., return the correct answer, and be done with it: there is no additional performance penalty (I think), and the function works for all double precision numbers. This should be the default.
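
As a sketch of what that branch could look like (a hypothetical function; Base’s sin and rem2pi stand in for the fast and slow kernels, and this is not any particular compiler’s implementation):

    # Options 1, 3 and 4 all hang off the same branch; only the fallback differs.
    function checked_sin(x::Float64; on_huge::Symbol = :accurate)
        if abs(x) <= 1.0e16
            return sin(x)                                 # fast path
        elseif on_huge === :accurate                      # option 1
            return sin(rem2pi(x, RoundNearest))
        elseif on_huge === :error                         # option 3
            throw(DomainError(x, "argument too large for fast sin reduction"))
        else                                              # option 4
            return NaN
        end
    end

    checked_sin(1.0e20)                  # accurate fallback
    checked_sin(1.0e20; on_huge = :nan)  # NaN instead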

Option 2 might have an advantage for optimization if it allows branchless computation that can always be vectorized efficiently. If that’s not the case, I don’t see any advantage. If it is, one practical solution might be to enable this implementation via a compiler option (or under a special name in the stdlib); when enabled, the compiler could also insert runtime checks (in Debug mode) to ensure the argument stays below 1e16 (or whatever the threshold is).

Do you think that’s a reasonable approach?

But I would still be interested in actual applications. It would greatly help me form a better opinion.

Thanks for this @kargl, I’ll play with it. Yes, I meant runtime. I read the paper by K. C. Ng, “Argument Reduction for Huge Arguments: Good to the Last Bit”; it describes the canonical implementation that most compilers use.

Your example shows roughly a 10x slowdown for x = 1e34 compared to x = 100. That matches my experience. I am not aware of a faster implementation for large arguments.

For reference, here’s the Julia version. julia/rem_pio2.jl at master · JuliaLang/julia · GitHub
TL;DR: we use custom code for x < 9pi/4, Cody-Waite for x < 2^20, and Payne-Hanek for bigger x.
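
For readers unfamiliar with the middle tier, here is a toy two-constant Cody-Waite reduction in Julia (my own split and cutoff, not the actual rem_pio2.jl code):

    # Keep only the top `keep` explicit significand bits of a Float64, so that
    # n * hi is exact for integer n with up to ~(52 - keep) bits.
    function truncate_sig(v::Float64, keep::Int)
        mask = ~((UInt64(1) << (52 - keep)) - UInt64(1))
        return reinterpret(Float64, reinterpret(UInt64, v) & mask)
    end

    const PIO2_HI = truncate_sig(pi / 2, 32)              # ~33 significant bits
    const PIO2_LO = Float64(big(pi) / 2 - big(PIO2_HI))   # the dropped tail

    # Two-constant Cody-Waite: subtract n*pi/2 in two pieces so the first
    # subtraction is essentially exact.  Valid only while n*PIO2_HI is exact,
    # i.e. roughly |x| < 2^20 * pi/2 with this toy split.
    function rem_pio2_cw(x::Float64)
        n = round(x * (2 / pi))
        r = (x - n * PIO2_HI) - n * PIO2_LO
        return mod(Int(n), 4), r          # quadrant index and reduced argument
    end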


Thanks @kargl and @oscardssmith for the links. Do either of you know why this argument reduction returns the remainder as a double-double (effectively quadruple precision)? For example, in FreeBSD’s sin kernel (k_sin.c « src « msun « lib - src - FreeBSD source tree), the double-double becomes the x and y arguments.

For comparison, we tested the following kernel: https://gitlab.com/lfortran/lfortran/-/blob/22ebdd54d62b86352564b9061e03f49e4339553d/src/runtime/pure/lfortran_intrinsic_trig.f90#L70, and it seems comparable in accuracy to the k_sin.c kernel, but it requires only a double precision input, not a double-double (quadruple precision).

I know the libm sin implementation is accurate to within 1 ULP, while the above kernel might be off by a few more ULPs. Are these last bits of precision the reason?

Julia’s sin for Float64 is accurate to around 0.68 ULP, and any polynomial kernel will have some inaccuracy, so if you want to keep accuracy that high, you need it.

We don’t currently have a special @fastmath sin(::Float64), but we definitely should.


Is your understanding that a simple polynomial kernel will have more inaccuracy than ~0.7 ULP, so if under 1 ULP on average is required, one needs the double-double input from the argument reduction?

Ok, that makes sense.

My other question is: how can you get a higher-accuracy argument for the kernel if the initial input to sin(x) is just double precision? How can that happen?

I know that a large argument x can happen to be almost an integer multiple of pi/2, so that when it is reduced there is catastrophic cancellation (which is why a large number of digits of pi/2 is sometimes required). I don’t know if this is related.

The problem isn’t that the polynomial will have more than 0.7 ULP of error by itself. The problem is that if you want everything combined to be accurate, you can’t have two separate places where you round.

The approach Julia takes is to follow the IEEE convention where you assume that the inputs to your function are exact. While this isn’t representative of reality, adopting this convention makes error analysis significantly easier.

Ah, so it’s the combination of first rounding in rem_pio2 to double precision (accurately) and then rounding again (accurately) in the polynomial kernel, so the two roundings together can sometimes introduce errors above 1 ULP? OK, that makes perfect sense.
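
Here is a toy Julia illustration of why the kernel wants that head/tail pair (Base sin/cos stand in for the polynomial kernel, BigFloat for the reference; this is not the actual k_sin.c code, and the quadrant is ignored for simplicity):

    # First-order correction: sin(hi + lo) ≈ sin(hi) + lo*cos(hi), so the tail
    # of the reduced argument is not simply rounded away.
    sin_dd(hi::Float64, lo::Float64) = sin(hi) + lo * cos(hi)

    # Reduce a large argument in extended precision, then split the remainder
    # into a Float64 head and tail.
    x    = 1.0e18
    rbig = mod(big(x), big(pi) / 2)
    hi   = Float64(rbig)
    lo   = Float64(rbig - hi)

    @show sin(hi) sin_dd(hi, lo) Float64(sin(rbig))
    # Depending on x, sin(hi) alone may already lose the last bit; the
    # lo*cos(hi) term removes the error that comes from rounding the reduced
    # argument to a single double.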

Indeed, I’ve already been reading it. There are a lot of very interesting details in there! Thanks for the pointers.

Option 4 is yet another argument for block-structured exception handling.

Asking for an actual application is wise. My guess is that a program that is asking for sin(1e16) has a bug somewhere before that statement, and the argument to sine should not be nearly that big.


It’s probably good to mention that one solution to this is to have the language implement sinpi(x) = sin(pi*x). This version is much easier to compute numerically, since for big x the reduction step is just splitting off the nearest integer, which can be done exactly without any issues.
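
A quick Julia illustration of the difference (the exact values printed are not the point, only that the naive form has already lost the argument):

    # sinpi only needs the fractional part of x and the parity of its integer
    # part, both of which are exact for any Float64, however large:
    @show sinpi(1.0e16)          # 1e16 is an (even) integer, so exactly zero
    @show sinpi(1.0e16 + 2.0)    # still an integer: spacing at 1e16 is 2.0

    # The naive sin(pi*x) is hopeless here: pi*1e16 is rounded (with spacing
    # 4.0 at that magnitude) before sin ever sees it, so the result is
    # unrelated to sin(pi * 10^16) = 0.
    @show sin(pi * 1.0e16)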

Similarly, having the language implement exp2(x) = 2^x and log2(x) = log(x)/log(2) specially is a good idea, since the dedicated versions tend to be both faster and more accurate.

Thanks Bill. That’s what I was thinking. Also note that the distance between two consecutive double precision floating point numbers at 1e16 is exactly 2.0, so at that point we have only about 3 floating point numbers per period of the sin function. Numerically, I can’t see how that could be useful for any kind of integration or ODE/PDE solving. But perhaps there is an application, and that’s why I was asking.
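
For the record, this is easy to check in Julia:

    @show eps(1.0e16)                     # exactly 2.0
    @show nextfloat(1.0e16) - 1.0e16      # same thing: 2.0
    @show 2pi / eps(1.0e16)               # ≈ 3.14 representable values per period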

If sinpi(x) can be computed without issues for large x, why can’t we compute sin(x) = sinpi(x/pi)? I think the x/pi operation does not lose accuracy. However, I can’t see how this could work; it seems too good to be true.

The problem is that the first step of computing sinpi is computing rem(x, 1), which amplifies the error in x/pi. To take a concrete example: computing 1.1e18/pi has an absolute error of about 10.7 (eps(1.1e18/pi) = 64). So even if sinpi is accurate and the division is accurate, composing the two loses accuracy. In this example, sinpi(1.1e18/pi) = 0.0, while sin(1.1e18) is actually around 0.82.
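
The same amplification can be reproduced directly in Julia (the 10.7 and 0.82 figures are the ones quoted above; I have not re-derived them here):

    x = 1.1e18                 # exactly representable in Float64
    q = x / pi                 # the quotient sinpi(x/pi) would start from

    # Absolute rounding error of the division, measured against BigFloat:
    abserr = Float64(abs(big(x) / big(pi) - big(q)))   # ~10.7 as quoted above

    # eps(q) == 64, so q is necessarily an even integer: sinpi(q) is exactly
    # zero no matter what x was, while sin(x) is ~0.82.
    @show eps(q) abserr sinpi(q) sin(x)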


@oscardssmith thanks for the example. Makes sense. I still find it surprising, or counter-intuitive, but I understand it now, in the sense that I can follow the logic.

Nice. I met Paul Zimmermann once; he is a great guy.

I think the way to think about this is that the way numerical analysis measures error is not perfectly aligned with the arguably most intuitive way of measuring it.

The intuitive way of looking at error is to consider backward error, i.e., how large dx must be for the computed value float(f(x)) to equal the true value of f(x+dx). However, backward error has some issues: it isn’t always defined, and it is often much harder to use in proofs.

As a result, most floating point functions are specified in terms of their relative forward error, which is just float(f(x))/f(x) - 1. The unintuitive thing about this is that it pretends there is no uncertainty in the input x of your function, which means that errors don’t always compose nicely. The benefits are that it is easier to use in proofs, and it provides a clearer specification of what acceptable results for a function should be.
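
As a concrete way to measure that forward error for a Float64 function, here is a small Julia helper (the name is made up) that treats the input as exact, uses a BigFloat evaluation as the true value, and expresses the result in ULPs rather than as a raw ratio:

    # Relative forward error of f at x, treating the Float64 input x as exact
    # and a BigFloat evaluation as the true value, expressed in ULPs of the
    # Float64 result.  f must accept both Float64 and BigFloat arguments.
    function forward_error_ulps(f, x::Float64)
        y    = f(x)                          # the result being judged
        yref = f(big(x))                     # "exact" result (input assumed exact)
        return Float64(abs(big(y) - yref) / eps(y))
    end

    forward_error_ulps(sin, 1.0e22)          # ULP error of sin at a huge argument
    forward_error_ulps(sin, 0.5)             # and at an ordinary one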


I can think of one example: using a bad unit for the calculation.

Say I actually want to calculate

sin(100)

where 100 means 100 light years, i.e., each about 10^16 meters, so the unit had better be the light year.

However, suppose I use a bad unit like the femtometer, which is 10^-15 meters.

Then sin(100 light years) becomes something like

sin(100 * 10^16 * 10^15 femtometers)

so it seems sin(10^33) needs to be calculated.

But in such a case I would definitely first check whether something is wrong on my end.

However, if someone is very stubborn and insists on using femtometers for this calculation, then things can become really hard :sweat_smile:

I know that in sin(100) the 100 should be unitless. But anyway, I just wanted to say that sometimes using a bad unit may create a need for unnecessary precision in a calculation.


This doesn’t make sense. If you are doing the math right, you are taking the sine of a ratio of two lengths, so the units don’t matter.

@oscardssmith I know, the argument of sin should be unitless; it was just a sloppy illustration :sweat_smile:. Maybe a bad unit plus some bad math plus bad habits can result in needing unnecessary precision in calculations, and not just for sin; it could be something else. Otherwise I cannot think of a need for sin(very big number). In reality we always make the numerical calculation unitless first.