FC-Compiler™ is a free Calculus-level compiler that simplifies tweaking the parameters in one’s math model. The FortranCalculus (FC) language is for math **modeling**, **simulation**, and **optimization**. FC is based on Automatic Differentiation, which reduces the computer code to an absolute minimum; i.e., a mathematical model, constraints, and the objective (function) definition. Minimizing the amount of code allows the user to concentrate on the science or engineering problem at hand.

What type of autodiff does this use? Forward mode, reverse mode, or something else?

I’ve located the PROSE Calculus Manual in the Internet Archive. Starting on page 2-1 it gives a description of the calculus language (note that the manual itself is from 1974), but it calls the approach “symbolic differentiation at a point”. A paper from the same period, Computing in Calculus (1975), is more specific, saying:

No derivative formulas are derived or stored; rather, derivative values are computed by pseudo-instructions that are interleaved with machine arithmetic.
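The quoted approach, where derivative *values* rather than derivative formulas are propagated step by step alongside the function evaluation, is essentially what is now called forward-mode AD. A rough Python sketch of what such interleaved evaluation looks like (the function and variable names here are mine, purely for illustration, not anything from PROSE):

```python
import math

def f_and_df(x):
    """Evaluate f(x) = sin(x) * x**2 and df/dx together, step by step.

    Each step is a tiny "pseudo-instruction" that computes a value and,
    interleaved with it, the derivative of that value with respect to x.
    No derivative formula for f is ever built or stored; only derivative
    values are propagated.
    """
    v1, d1 = x, 1.0                                  # v1 = x, dv1/dx = 1
    v2, d2 = math.sin(v1), math.cos(v1) * d1         # v2 = sin(v1)
    v3, d3 = v1 * v1, 2.0 * v1 * d1                  # v3 = x**2
    v4, d4 = v2 * v3, d2 * v3 + v2 * d3              # product rule
    return v4, d4

val, deriv = f_and_df(1.0)
```

At x = 1 this yields f = sin(1) and f′ = cos(1) + 2·sin(1), matching the analytic derivative of sin(x)·x².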

A few more hints are given in Automatic differentiation in PROSE (1987). The second paragraph on page 4 sounds roughly like a combination of forward and reverse modes.

If that paper is still accurate, I’d be very surprised if PROSE were competitive with modern AD frameworks (anything in Python, Julia, or any other modern language with autodiff libraries). Running AD purely as interpreted code leads to pretty massive inefficiencies.

Well, CDC (the company that produced the machines PROSE ran on) hasn’t existed for more than 30 years, so any comparison to modern AD frameworks or languages doesn’t make much sense.

If it is of any interest, The Evolution of Synthetic Calculus (1982) goes more in-depth. It appears they were using the automatic-derivative approach of Wengert (1964). They were also aware of the performance penalty due to interpretation, writing:

In a pure software implementation of synthetic differentiation, at least one order of magnitude in processing speed is sacrificed even in the “best case” scenario. […] The net penalty is greater than 200 to 1 on conventional machines of the IBM 370 class.

What type of autodiff does this use? Forward mode, reverse mode, or something else?

Hmm, I’m not sure on this either, except to say it may not be what you are expecting. It uses operator overloading for this. I think the team that worked on it referred to the approach as Synthetic Calculus. The last active member of the team that developed Slang (1960s), PROSE (1974-85), and FortranCalculus (1990-present) died in 2019. I’m the last person still involved, from the sidelines, with this line of compilers. I’m 77 and may not be around much longer, so get a free copy of FC ASAP.
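For readers unfamiliar with the term: operator overloading is one of the standard ways to implement forward-mode AD. A minimal illustration in Python (just the general idea, not FC’s actual implementation): each number carries its value together with its derivative, and the overloaded arithmetic operators apply the chain rule automatically.

```python
class Dual:
    """A value paired with its derivative; arithmetic applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Derivative of f at x: seed the input with dot = 1 and read off dot."""
    return f(Dual(x, 1.0)).dot

# d/dx (3*x*x + 2*x) at x = 2 is 6*x + 2 = 14
slope = derivative(lambda x: 3 * x * x + 2 * x, 2.0)
```

Any code written against ordinary arithmetic can then compute derivatives unchanged, which is presumably the appeal for a “calculus-level” language.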

I don’t know the other AD frameworks, but I do know how to use PROSE. I taught PROSE to engineers and scientists in Silicon Valley and helped them get started solving their problems. I’m the author of a (140-page) casebook on industry problems that used PROSE to solve their math problems. It is titled:

Engineering Design Optimization using Calculus Level Methods: A Casebook Approach

NASA’s Apollo space program had TRW develop the first Calculus-level compiler for their work on getting to the moon and back. Many tough math problems were involved.

@ivanpribec, PROSE moved to Mexico around 1985 and I don’t know if it is still being used today.

I have been looking at the documentation. The product reminds me of a program I wrote a long time ago, when PCs were still a relative novelty and I had a lot fewer gray hairs. It was in FORTRAN and read the differential equations to solve from the input file. It also used a graphical library to display the solutions. Fun to use and to develop. The code has been lost in the mists of time, and I would probably rewrite it completely if I were to stumble on a copy.

Still, back then it served its purpose. It was never as capable as I think FC is, though.

FC not only solves ODEs but also allows a user to tweak parameters while their ODEs are being solved, in search of an optimum solution.

FC has over 100 man-years of development in it … way too much for one person to handle.

Yes, I caught some hints of that - my scanning of the documentation was very cursory (but I have downloaded the installation file). I am trying to find a tutorial, not so much the casebook, to get an idea of the various sections in such a program and the syntax involved - can you point me to one?

‘Sections’ of the FC program … I have no idea about that. I’m a user of FC, not a developer of FC. But I worked alongside a few of the developers.

Well, then I will try and get acquainted with it via the examples. Thanks.

I’m having numerical problems with a nested solver. Any ideas on how to solve this? Here is an outline of my code:

```
FIND Yzero, Zimag, Zreal;
1   in flatDlay; by Jove( contrl2);
2   with lower zeroLow;
3   TO MINIMIZE errDlay
```

The **Jove** solver is a sequential unconstrained optimization technique applying Newton’s second-order gradient search.
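For anyone following along without an FC background, a second-order (Newton) gradient search drives the gradient of the objective to zero using both first and second derivatives. A toy one-dimensional sketch of the idea (mine, not Jove’s actual algorithm):

```python
def newton_minimize(df, d2f, x, iters=20):
    """Minimize by driving the gradient df to zero with Newton steps:
    x <- x - f'(x) / f''(x)."""
    for _ in range(iters):
        x = x - df(x) / d2f(x)
    return x

# Minimize f(x) = (x - 3)**2:  f'(x) = 2*(x - 3),  f''(x) = 2.
xmin = newton_minimize(lambda x: 2.0 * (x - 3.0), lambda x: 2.0, 0.0)
```

On a quadratic like this, a Newton step lands on the minimizer (x = 3) in a single iteration; on general objectives the curvature information just speeds convergence near the optimum.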

This ‘FIND’ statement eventually calls the following (nested) ‘FIND’ statement:

```
FIND gain, pReal, pImag;
1   in transfer; by Jupiter( contrl1);
2   with lower poleLow; and uppers poleHi;
3   TO MINIMIZE errTran
```

The **Jupiter** solver is a moving exterior truncations penalty-function method applying a Davidon-Fletcher-Powell (DFP) variable-metric search.
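For context, a DFP variable-metric search maintains an approximation H to the inverse Hessian and refines it from each step s and gradient change y. A minimal sketch of the standard textbook DFP update (my illustration, not Jupiter’s internals), in pure Python with lists:

```python
def dfp_update(H, s, y):
    """DFP inverse-Hessian update:
    H+ = H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
    where H is a list of rows, s the step taken, y the change in gradient."""
    n = len(s)
    Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
    sy = sum(s[i] * y[i] for i in range(n))
    yHy = sum(y[i] * Hy[i] for i in range(n))
    return [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
             for j in range(n)] for i in range(n)]
```

The updated matrix satisfies the secant condition H+ y = s, so curvature information accumulates without ever forming or inverting the true Hessian.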

This ‘find’ statement calls the ‘transfer’ routine until the ‘errTran’ parameter is minimized. Then execution returns to the first ‘find’ statement, where it changes its parameters (Yzero, Zimag, Zreal) in order to achieve its objective of minimizing ‘errDlay’. This loop may continue for 20+ iterations, but I’m getting an error message saying:

“*** THE OBJECTIVE FUNCTION BEING OPTIMIZED IS NEITHER AN EXPLICIT NOR AN IMPLICIT FUNCTION OF THE INDEPENDENT VARIABLES.”

But it has just finished summary #1 of the 2nd ‘find’ statement, having tweaked the parameters gain, pReal, and pImag on its 1st loop. Here is part of the summary …

```
LOOP NUMBER …   [INITIAL]      7              8

UNKNOWNS
  GAIN          2.293703E-03   2.293707E-03   2.293697E-03
  PREAL ( 1)    6.259960E-02   6.259913E-02   6.259846E-02
  PREAL ( 2)    4.406212E-02   4.406053E-02   4.405971E-02
  PREAL ( 3)    5.736741E-02   5.753962E-02   5.743653E-02
  PIMAG ( 1)    2.148273E-01   2.148198E-01   2.148219E-01
  PIMAG ( 2)    3.752846E-01   3.752705E-01   3.752753E-01
  PIMAG ( 3)    9.999969E-01   9.999994E-01   9.999997E-01

OBJECTIVE
  ERRTRAN       2.881500E-03   2.881502E-03   2.881502E-03
```

Another time with different initial values …

```
LOOP NUMBER …   [INITIAL]      1

UNKNOWNS
  GAIN          2.141501E-05   2.199075E-03
  PREAL ( 1)    2.000000E-01   2.241059E-02
  PREAL ( 2)    2.000000E-01   5.400497E-02
  PREAL ( 3)    2.000000E-01   6.942999E-02
  PIMAG ( 1)    1.000000E-01   4.238228E-01
  PIMAG ( 2)    2.000000E-01   3.050316E-01
  PIMAG ( 3)    3.000000E-01   1.751917E-01

OBJECTIVE
  ERRTRAN       1.840880E-01   1.657471E-03

—END OF LOOP SUMMARY
```

“*** IMPROPER RANK IN REFERENCE TO A DYNAMIC ARRAY” at a different place in my code … hmmm

Same problem, just different initial conditions, but better objective results! Any ideas for a fix?
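In case it helps to see the structure outside FC: the nested arrangement described above corresponds to an outer minimization whose objective is itself evaluated by running an inner minimization to completion. A rough Python analogue with a simple golden-section search standing in for both solvers (all names and objective functions here are hypothetical stand-ins, not FC code); note that the outer objective must still genuinely depend on the outer unknowns after the inner solve, or the outer solver will see a flat function, which is consistent with the error message quoted above:

```python
def minimize_scalar(f, lo, hi, iters=60):
    """Golden-section search on [lo, hi]; a stand-in for an FC solver."""
    g = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(iters):
        if f(c) < f(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def err_tran(gain, y):
    """Inner objective (hypothetical stand-in for the 'transfer' fit)."""
    return (gain - y) ** 2 + 1.0

def err_dlay(y):
    """Outer objective: each evaluation runs the INNER minimization first."""
    gain = minimize_scalar(lambda g: err_tran(g, y), -10.0, 10.0)
    return (y - 2.0) ** 2 + err_tran(gain, y)

y_best = minimize_scalar(err_dlay, -10.0, 10.0)
```

Here the inner solve makes err_tran flat in gain, but err_dlay still varies with y, so the outer search can settle on y = 2.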

Just a wild guess based on the error message you showed: could it be that the solver ends up in an area where the objective function is flat, at least wrt the independent variables? In that case, a solver would be unable to determine a direction to take.

I do not know the functions that are involved, but I would inspect them to see if anything of the sort might happen. This would include leaving the area of well-defined behaviour. (Currently I am bugged by a report from a user of our software about NaNs that cause the simulation to stop. Such a value might be a good reason for the solver to think the function is flat …)
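One cheap diagnostic along those lines: evaluate the objective at the current point and at a small perturbation of each unknown, and check for NaNs or a numerically unchanged value. Either would explain a solver reporting that the objective is not a function of the independent variables. A sketch (function and parameter names are mine, purely illustrative):

```python
import math

def diagnose(obj, x, h=1e-6):
    """Finite-difference probe: flags NaNs and unknowns the objective ignores.

    obj takes a list of unknowns and returns the objective value;
    x is the current point. Returns a message or a list of flat indices.
    """
    f0 = obj(x)
    if math.isnan(f0):
        return "objective is NaN at current point"
    flat = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h * max(1.0, abs(x[i]))   # relative step, floor of h
        fi = obj(xp)
        if math.isnan(fi):
            return f"objective is NaN when perturbing variable {i}"
        if fi == f0:                        # no change at all: looks flat
            flat.append(i)
    return flat or "objective responds to every unknown"
```

Running something like this on the inner and outer objectives at the point where the error appears would quickly distinguish a genuinely flat region from a NaN sneaking in.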

Yes, it is trying to fit the shape of a mountain (right side of the peak only). So after a few iterations it should be settling in on the 5th, 6th, 7th, etc. digits of accuracy; it is kind of a stiff equation at this point. I’ve done this before with sequential ‘find’ statements in a loop, and it works. But I have learned why an engineer would want it as a nested ‘find’ statement.

I have tried multiplying the error function by 10,000 in order to magnify the last few digits. It helped some, but it still ends with such error messages.

Thanks for your comments