I know this is a newbie question, but in my field (oceanography) lots of people say that Python and other languages are better because you can use AD instead of finite differences. Yet I cannot find papers or resources that explain how to compute a derivative automatically without an actual function, using only an array of points. Does someone have some resources about that?
AD is a program transformation (source-to-source translation or operator overloading); it doesn't just use a list of numbers, it needs the code that produced them. The difference between AD and symbolic differentiation is that symbolic differentiation requires flattening the program into one closed-form expression, while AD handles more dynamic program flow by following the path that the primal execution actually takes. If all you have is an array of sampled points and no function, AD does not apply; you are back to finite differences, or to fitting a function to the data and differentiating that.
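To make that concrete, here is a minimal hand-written sketch (hypothetical code, not the output of any particular AD tool) of forward-mode AD applied to a program with a data-dependent branch: each primal statement is paired with a derivative statement, and the derivative simply follows whichever branch the primal execution takes.

```fortran
! Hand-written sketch of forward-mode AD (hypothetical example).
! Each primal statement is paired with its derivative statement; the
! derivative follows the branch actually taken at run time, which a
! single flattened symbolic expression cannot easily represent.
program ad_follows_primal_flow
  implicit none
  real :: x, dx   ! input and seed derivative dx/dx = 1
  real :: f, df   ! output and df/dx

  x  = 0.8
  dx = 1.0

  if (x > 1.0) then
     f  = x**2
     df = 2.0*x*dx      ! derivative of the branch taken
  else
     f  = sin(x)
     df = cos(x)*dx     ! derivative of the branch taken
  end if

  print *, 'f =', f, '  df/dx =', df
end program ad_follows_primal_flow
```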
- You can look at this website: www.Autodiff.org - Community Portal on Automatic Differentiation (although I'm not sure if it is up to date).
- Wikipedia explains the automatic differentiation schemes very well: Automatic differentiation - Wikipedia. The forward mode is easy to understand; see the sketch below.

You have AD schemes in many languages; see the autodiff web page.
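For illustration, a minimal sketch of forward mode written out by hand (a hypothetical example, f(x) = x**2 * sin(x), not taken from any of the resources above): every intermediate value carries its derivative, and the chain rule is applied one operation at a time.

```fortran
! Forward mode by hand (hypothetical example): propagate
! (value, derivative) pairs through f(x) = x**2 * sin(x).
program forward_mode_sketch
  implicit none
  real :: x, dx, v1, dv1, v2, dv2, f, df

  x  = 1.5
  dx = 1.0              ! seed: d(x)/dx = 1

  v1  = x**2
  dv1 = 2.0*x*dx        ! chain rule for x**2

  v2  = sin(x)
  dv2 = cos(x)*dx       ! chain rule for sin

  f  = v1*v2
  df = dv1*v2 + v1*dv2  ! product rule

  print *, 'f =', f, '  df/dx =', df  ! df/dx = 2x sin(x) + x**2 cos(x)
end program forward_mode_sketch
```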
The GitHub repository @certik linked to collects suggestions for J3 to consider in the future; an entry there does not mean J3 is actively discussing it, but it's a handy place to flesh things out before formal consideration.
The following arXiv survey paper has an informative discussion of the various forms of AD as they pertain to machine learning.
That is a great idea, though I believe it's somewhat easier to develop advanced features on top of what is currently available in the language than to bake them directly into the compiler.
For autodiff specifically, OOP-based operator overloading would be enough for users to implement it; a minimal sketch follows below.
Though it is a little harder to reuse old code without modifying it to use the new autodiff derived types, so I would like to gently point out that having generic types would have helped here.
Although with the Fypp preprocessor we can achieve much the same result as with generic types, so it's not much of a problem.
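For the record, here is a minimal sketch of that operator-overloading approach (the type and procedure names are hypothetical, not an existing library's API): a derived type carries a (value, derivative) pair, and each overloaded operator applies the matching differentiation rule.

```fortran
! Minimal sketch of forward-mode AD via operator overloading
! (hypothetical names, not any existing library's API).
module dual_mod
  implicit none
  private
  public :: dual, operator(*), operator(**)

  type :: dual
     real :: val = 0.0   ! primal value
     real :: der = 0.0   ! derivative w.r.t. the chosen input
  end type dual

  interface operator(*)
     module procedure mul_dd
  end interface

  interface operator(**)
     module procedure pow_di
  end interface

contains

  elemental function mul_dd(a, b) result(c)
    type(dual), intent(in) :: a, b
    type(dual) :: c
    c%val = a%val*b%val
    c%der = a%der*b%val + a%val*b%der   ! product rule
  end function mul_dd

  elemental function pow_di(a, n) result(c)
    type(dual), intent(in) :: a
    integer,    intent(in) :: n
    type(dual) :: c
    c%val = a%val**n
    c%der = real(n)*a%val**(n-1)*a%der  ! chain rule, integer power
  end function pow_di

end module dual_mod

program demo
  use dual_mod
  implicit none
  type(dual) :: x, f
  x = dual(1.5, 1.0)   ! seed dx/dx = 1
  f = x**2 * x         ! f = x**3, so df/dx = 3*x**2 = 6.75
  print *, f%val, f%der
end program demo
```

Once the intrinsic operators are overloaded like this, existing expressions differentiate themselves as soon as the real variables are re-declared as type(dual); that re-declaration is precisely the cost of reusing old code mentioned above.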
And for those who toil in the CFD vineyard and are looking for a Fortran-specific approach to computing flux Jacobians using AD, I can also recommend the following arXiv pre-print and the subsequent Journal of Computational Physics article.
See also J. Comput. Phys., Vol. 399 (2019), 108942:
https://www.sciencedirect.com/science/article/abs/pii/S0021999119306473
@fedebenelli, I've updated my automatic differentiation library AD_dnSVM, and now it is possible to compute g(x,y) from f(x,y):
```fortran
f = TWO * X**2 * Y
g = deriv(f,ider=1)**2 * X/Y
```
where X, Y, f, and g are extended dual numbers (carrying higher derivatives).
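As a sanity check (assuming TWO is the constant 2 and ider=1 selects the derivative with respect to the first variable, x): with f = 2*x**2*y, deriv(f,ider=1) should be df/dx = 4*x*y, so g = (4*x*y)**2 * x/y = 16*x**3*y.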