I have used DNAD successfully in the past to compute Jacobian matrices. Here is the link to a paper with the description. I believe a short introduction can also be found in the PhD thesis of Joshua Hodson:
Numerical Analysis and Spanwise Shape Optimization for Finite Wings of Arbitrary Aspect Ratio, Joshua D. Hodson, Utah State University, 2019
A simple usage example is given in the thesis:
#define real type(dual)
program Circle
    use dnad   ! assuming the module provided by dnad.F90 is named dnad
    REAL, parameter :: pi = 3.1416
    real :: r, a
    write(*,*) "Enter a radius: "
    read(*,*) r
    a = pi*r**2
    write(*,*) "Area = ", a
end program Circle
The program is then compiled using gfortran -Ddnad -Dndv=1 dnad.F90 Circle.F90, which replaces the lower-case real variables with variables of type(dual) (the upper-case REAL parameter is left untouched). Running the program produces the output:
Enter a radius
Area = 78.5400009 31.4159985
We see that, on printing the area, the program also prints the derivative vector, which in this example is equal to the circumference, \partial A/\partial r = 2 \pi r. In my application, evaluating small Jacobian matrices of size 9 x 9, the DNAD code proved just as fast as hand-coded Jacobians.
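To illustrate the mechanism, here is a minimal one-variable sketch of the operator-overloading idea. This is not the actual DNAD module (which generalizes this to ndv derivative components and overloads the full set of intrinsics); the type and names here are my own:

module mini_dual
    implicit none
    type :: dual
        real :: x    ! function value
        real :: dx   ! derivative w.r.t. the single design variable
    end type
    interface operator(*)
        module procedure mult_dd
    end interface
contains
    elemental function mult_dd(u, v) result(w)
        type(dual), intent(in) :: u, v
        type(dual) :: w
        w%x  = u%x * v%x
        w%dx = u%dx*v%x + u%x*v%dx   ! product rule propagates the derivative
    end function
end module mini_dual

Seeding the input as r = dual(5.0, 1.0) and evaluating r*r then carries \partial(r^2)/\partial r = 2r along with the value; a full implementation also needs mixed real*dual operators so that expressions like pi*r**2 work unchanged.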
One downside is that, as currently structured, the code requires the user to fix the number of design variables (ndv) at compile time, meaning it cannot be used in multiple places of the same program that need different numbers of design variables. I have been thinking about doing some experiments combining DNAD with the fypp preprocessor to make it more flexible.
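As a rough, untested idea of what that could look like, a fypp template might stamp out a separately named dual module per requested number of design variables (the file and module names here are hypothetical):

#! dual_tmpl.fypp -- hypothetical template; render with e.g.
#!   fypp -DNDV=9 dual_tmpl.fypp dual9_mod.f90
#:set NDV = getvar('NDV', 1)
module dual${NDV}$_mod
    implicit none
    type :: dual${NDV}$
        real :: x
        real :: dx(${NDV}$)
    end type
end module dual${NDV}$_mod

Rendering the template once per needed ndv would sidestep the single compile-time setting, at the cost of duplicated generated code.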
A second code I’ve experimented with is Tapenade, an online automatic differentiation engine. It can also be installed locally as a set of Java classes, which then allows you to integrate it into a Makefile (I haven’t tried it). Compared to DNAD, which only implements forward-mode automatic differentiation (using operator overloading), Tapenade also supports reverse-mode differentiation. There are, however, some limitations with respect to which language features are supported for differentiation. From scrolling through the documentation, I believe that if you stick to “procedural-style” code, with subroutines and functions operating on the default Fortran types (reals and integers), it should work quite well.
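To give a flavour of source transformation, here is a schematic written from memory, not actual Tapenade output; in tangent mode the tool generates a routine with a _d suffix and threads a seed variable alongside each differentiated argument:

! Original routine
subroutine area(r, a)
    real, intent(in)  :: r
    real, intent(out) :: a
    a = 3.1416*r**2
end subroutine area

! Schematic of the tangent-mode code Tapenade would generate:
! rd is the input seed, ad the propagated derivative.
subroutine area_d(r, rd, a, ad)
    real, intent(in)  :: r, rd
    real, intent(out) :: a, ad
    ad = 3.1416*2*r*rd
    a  = 3.1416*r**2
end subroutine area_d

Calling area_d with rd = 1.0 recovers the same \partial A/\partial r as the dual-number approach, but from generated source rather than overloaded operators.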
One of the hurdles I’ve met is how to differentiate code that relies upon BLAS or LAPACK. In principle, I believe it should be possible to download the reference libraries from netlib and either replace all real variables with type(dual) and use DNAD, or run BLAS and LAPACK through Tapenade, but it feels kind of daunting. Recently, a set of reverse-mode differentiated BLAS routines was published in TOMS, but I haven’t found time to dig deeper.