What global minimization/maximization algorithms do you use?

If the function (either analytic or numerical) “behaves” (meaning it is differentiable in a given interval), I normally use an algorithm I found in this paper (and others by the same authors, but this is the first and probably the most detailed one; a freely downloadable version can be found here). They describe the theory behind their algorithm in detail, but if you are in a hurry they also give pseudo-code (Pascal-like) which is easy to follow and easy to implement in Fortran; I did that years ago and have used it ever since. The algorithm is able to find all the roots (or extrema) of a function within a user-specified interval - not just some roots/extrema, all of them.

I think the idea behind that algorithm is very clever: if you can somehow find the total number of roots/extrema within a given interval, then you can easily find the roots/extrema themselves. Using Kronecker-Picard theory, the computation of the total number of roots or extrema reduces to a definite integral (I use QUADPACK for the integration itself, but you can use whatever you like; it’s not a “badly-behaving” integral). Once you have that number, computing the roots/extrema is trivial and fast: divide and conquer. Bisect the interval, count the roots in each half, and keep subdividing until each subinterval contains exactly one root, which you can then pin down with any one-root method.
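To make the idea concrete, here is a rough stdlib-only Python sketch (my own toy reconstruction, not the authors' Fortran implementation). The Kronecker-Picard result says that, for an f that is twice continuously differentiable on [a, b] with only simple roots and no root at the endpoints, the number of roots is N = (1/pi) * [atan(g*f'(b)/f(b)) - atan(g*f'(a)/f(a)) - integral from a to b of g*(f*f'' - f'^2)/(f^2 + g^2*f'^2) dx], where g is an arbitrary positive constant. Composite Simpson's rule stands in here for QUADPACK:

```python
import math

def count_roots(f, df, d2f, a, b, gamma=1.0, n=20001):
    """Count the simple roots of f in (a, b) via the Kronecker-Picard integral.

    Assumes f is C^2 on [a, b], all roots are simple, and f(a), f(b) != 0.
    gamma is an arbitrary positive constant; n (odd) sets the Simpson grid.
    """
    def integrand(x):
        fx, d1, d2 = f(x), df(x), d2f(x)
        return gamma * (fx * d2 - d1 * d1) / (fx * fx + gamma * gamma * d1 * d1)

    # Composite Simpson's rule as a stdlib stand-in for QUADPACK;
    # n must be odd so the number of subintervals is even.
    h = (b - a) / (n - 1)
    s = integrand(a) + integrand(b)
    for i in range(1, n - 1):
        s += (4.0 if i % 2 else 2.0) * integrand(a + i * h)
    integral = s * h / 3.0

    boundary = (math.atan(gamma * df(b) / f(b))
                - math.atan(gamma * df(a) / f(a)))
    return round((boundary - integral) / math.pi)

def find_all_roots(f, df, d2f, a, b, gamma=1.0, tol=1e-10):
    """Divide and conquer: bisect until each subinterval holds exactly one
    root, then polish that root with plain bisection."""
    n_roots = count_roots(f, df, d2f, a, b, gamma)
    if n_roots == 0:
        return []
    if n_roots == 1:
        lo, hi = a, b  # one simple root => f changes sign across [a, b]
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return [0.5 * (lo + hi)]
    mid = 0.5 * (a + b)
    return (find_all_roots(f, df, d2f, a, mid, gamma, tol)
            + find_all_roots(f, df, d2f, mid, b, gamma, tol))

# The roots of sin(x) in (0.5, 9.9) are pi, 2*pi, 3*pi:
print(find_all_roots(math.sin, math.cos, lambda x: -math.sin(x), 0.5, 9.9))
```

In real use you would replace the Simpson loop with a proper adaptive quadrature (QUADPACK, or scipy.integrate.quad, which wraps it), and to find extrema instead of roots you run the same machinery on f' (which is why the method asks for one more derivative).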

The neat thing about this approach is that it shouts “I am highly parallelizable” from afar (see, e.g., this paper). Furthermore, the method works very well with numerical functions (for example, a cubic spline built from computed data). I have used this algorithm to compute whatever a given task required: from simple roots, maxima, or minima to eigenvalues/eigenfrequencies.

The drawback is what you probably noticed right from the beginning: the function must be at least twice continuously differentiable in the interval where you are looking for all the roots or extrema.
