I have created this issue on my repository and really need help.
Any ideas on how to solve and implement this exercise in Fortran?
If you’re writing a simple program with user interaction, you can ask the user to enter a value of ‘z’ in the [-1, 1] range.
With the user’s input ‘z’ and x = 3, you can calculate ‘m’ — a simple Newton iteration should converge in a few steps.
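Since the exercise’s actual equation relating m, z, and x isn’t shown, here is a minimal Newton-iteration sketch with a purely hypothetical residual f(m) = m**3 - z standing in for it:

```fortran
program newton_sketch
  implicit none
  real :: z, m, f, fp
  integer :: it
  print *, 'Enter z in [-1, 1]:'
  read *, z
  m = 1.0                       ! initial guess
  do it = 1, 50
     f  = m**3 - z              ! hypothetical f(m); replace with the real equation
     fp = 3.0*m**2              ! its derivative f'(m)
     if (abs(fp) < tiny(1.0)) exit   ! avoid division by a vanishing derivative
     m = m - f/fp               ! Newton update
     if (abs(f) < epsilon(1.0)) exit ! converged
  end do
  print *, 'm =', m
end program newton_sketch
```

Swap in the exercise’s own f(m) and f'(m); the loop structure stays the same.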
Then evaluate the summation series involving the Greek letter ‘xi’ by adding terms and stopping when the increment is smaller than the EPSILON of the floating-point kind being used.
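The stopping rule can be sketched like this — note the term formula below is only a placeholder, since the actual xi-series isn’t shown in the question; only the EPSILON-based loop is the point:

```fortran
program series_sketch
  implicit none
  real :: s, term, xi
  integer :: k
  xi = 0.5                      ! hypothetical value of xi
  s  = 0.0
  k  = 0
  do
     term = xi**k / real(k+1)   ! placeholder term; substitute the real series term
     s = s + term
     if (abs(term) < epsilon(1.0)) exit  ! increment below machine epsilon: stop
     k = k + 1
  end do
  print *, 'sum =', s
end program series_sketch
```

For double precision you would use epsilon(1.0d0) instead, matching the kind of the accumulator.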
‘lnx’ is obtained from the Taylor-series expansion you show.
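Your exact expansion isn’t visible here, so as an illustration this sketch uses the standard Maclaurin series ln(1+y) = y - y**2/2 + y**3/3 - ... (valid for |y| < 1), with the same EPSILON stopping rule:

```fortran
program lnx_sketch
  implicit none
  real :: y, term, lnx
  integer :: k
  y = 0.5                       ! hypothetical input, so lnx approximates log(1.5)
  lnx  = 0.0
  term = -1.0
  k = 0
  do
     k = k + 1
     term = -term * y           ! builds (-1)**(k+1) * y**k incrementally
     lnx  = lnx + term / real(k)
     if (abs(term/real(k)) < epsilon(1.0)) exit
  end do
  print *, 'lnx =', lnx
  print *, 'log =', log(1.0 + y)   ! intrinsic LOG for comparison
end program lnx_sketch
```

If your expansion is a different one (e.g. the atanh-based series for ln(x)), only the term update changes.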
And this is the implementation: