I am trying to modularise my model as much as possible; this helps in understanding the code and allows me to check off aspects that work.
Equally, as equations adapt and change over time, it will allow me to update the model relatively easily (I hope).
Yet, does stripping a single subroutine into multiple separate functions make the program compile into something less efficient?
I envisage that the compiler simply replaces each subroutine call with the relevant code, basically recreating the single subroutine in one way or another. Or does it make a memory reference, to which the machine code then has to jump each time, costing a few cycles in the process?
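My model's source isn't shown here, so as a neutral illustration here is a minimal C sketch of the two outcomes I am asking about. The equation and names are hypothetical stand-ins, not anything from my actual model; the point is that a small, pure, single-expression function like this is exactly the kind of call an optimising compiler (e.g. at `-O2`) will normally replace with the expression itself rather than leave as a jump.

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical single-equation helper (an illustrative stand-in, not my real
   equation): saturation vapour pressure via the Tetens formula, in kPa.
   Small, pure functions like this are prime candidates for inlining. */
static inline double saturation_vapour_pressure(double temperature_c)
{
    return 0.6108 * exp(17.27 * temperature_c / (temperature_c + 237.3));
}

double vapour_pressure_deficit(double temperature_c, double relative_humidity)
{
    /* At -O2 or higher, most compilers replace this call with the expression
       itself, so no call or jump survives in the generated machine code. */
    double e_sat = saturation_vapour_pressure(temperature_c);
    return e_sat * (1.0 - relative_humidity / 100.0);
}

int main(void)
{
    printf("VPD at 25 C, 60%% RH: %.3f kPa\n", vapour_pressure_deficit(25.0, 60.0));
    return 0;
}
```

Even where the compiler does keep the call, my understanding is that the cost is typically only a handful of cycles, which would only matter if the call sat inside a very hot loop.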
I have single lines of equation; one function has 20 such equations. Most equations are specialist and complex. I am moving most into their own functions, so that I can reference the source and identify all the elements of the equation correctly (using meaningful variable names such as `Temperature` rather than simply `t` or `temp`, which could be for temporary). This also means that I then know the equation works and need not worry about it in the future (rather than spending ages on a long subroutine, trying to rationalise all the variables and identify, when changes are made, where the error might now be occurring).
And, obviously, I can use the function over and over again.
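To make the refactor concrete, here is a hedged sketch (again in C, with an illustrative stand-in equation rather than anything from my actual model) of one single-line equation pulled out into its own function with descriptive argument names, so it can be checked once and then reused wherever it is needed:

```c
#include <math.h>
#include <stdio.h>

/* Illustrative stand-in equation (not my real model): potential temperature
   from temperature and pressure (Poisson's equation). Descriptive argument
   names make the physics readable at the call site. */
static double potential_temperature(double temperature_k, double pressure_hpa)
{
    const double reference_pressure_hpa = 1000.0;
    const double kappa = 0.2854;  /* R/cp for dry air */
    return temperature_k * pow(reference_pressure_hpa / pressure_hpa, kappa);
}

int main(void)
{
    /* The same tested equation is simply reused from different places. */
    double surface_theta = potential_temperature(288.15, 1013.25);
    double level_theta   = potential_temperature(268.15, 700.0);
    printf("theta: surface %.2f K, 700 hPa level %.2f K\n",
           surface_theta, level_theta);
    return 0;
}
```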
However, I am interested in the efficiency question, in case my 40-minute model (for a quick run) becomes an hour - and thus my long run (3-4 days on a very powerful system) becomes 5-10 days or more!
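I suppose I could also settle this empirically before committing to the refactor. Below is a rough, self-contained C timing sketch (hypothetical helper, nothing from my model) that compares a hot loop calling a tiny function against the same expression written out inline; with optimisation enabled the two usually time the same, because the call has been inlined away.

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical single-equation helper, deliberately tiny. */
static double small_equation(double x)
{
    return 0.5 * x * x + 2.0 * x + 1.0;
}

int main(void)
{
    const long n = 100000000L;  /* enough iterations to get a measurable time */
    double sum = 0.0;

    clock_t start = clock();
    for (long i = 0; i < n; ++i) {
        sum += small_equation((double)i * 1e-6);  /* via the helper function */
    }
    double with_call = (double)(clock() - start) / CLOCKS_PER_SEC;

    start = clock();
    for (long i = 0; i < n; ++i) {
        double x = (double)i * 1e-6;
        sum += 0.5 * x * x + 2.0 * x + 1.0;  /* same expression written inline */
    }
    double written_inline = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* Printing sum stops the compiler from discarding the loops entirely. */
    printf("sum=%g  via call: %.3f s  inline: %.3f s\n",
           sum, with_call, written_inline);
    return 0;
}
```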