This is a very interesting discussion. For collaborative codes where many people come and go over the years, I believe it is very important that the code is organized so that clashes between developers are minimized. In my experience, the best strategy is to have well-written, documented and organized main routines (main, I/O, globals, parallelization), even with certain aspects fixed in a sort of protocol, while the developers of particular modules should have the freedom to organize their own work as they like, as long as it fits the global picture/plan.
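One way such a "fixed protocol" could look in practice (a minimal sketch, with hypothetical names like `Module`, `init`, and `step` chosen for illustration) is a small abstract interface that the core freezes, while each module author keeps full control of the internals:

```python
# Hypothetical sketch of a frozen core contract: every module implements
# this interface; everything behind it is the author's own business.
from abc import ABC, abstractmethod

class Module(ABC):
    """Contract fixed by the core: these names and signatures do not change."""

    @abstractmethod
    def init(self, globals_cfg: dict) -> None:
        """Read only the agreed-upon global configuration."""

    @abstractmethod
    def step(self, state: dict, dt: float) -> dict:
        """Advance this module's piece of the state; no side effects elsewhere."""

class Chemistry(Module):
    # Internals are entirely the module author's choice, as long as
    # the contract above is honored.
    def init(self, globals_cfg: dict) -> None:
        self.rate = globals_cfg.get("reaction_rate", 1.0)

    def step(self, state: dict, dt: float) -> dict:
        c = state.get("concentration", 1.0)
        state["concentration"] = c * (1.0 - self.rate * dt)
        return state
```

The point of the sketch is that the core only ever calls `init` and `step`, so two groups developing different modules cannot step on each other's internals.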
It is a bit like building a telescope. Once the building is constructed, the size of the dome is fixed, the control room is set in a certain place, the main mirrors and the structure are there, and one can let various groups build their own instruments. They should have the freedom to optimize them as they want, but they still have to respect the overall blueprint and avoid clashes with other groups doing the same. In my experience, researchers joining an already developed code often tend to reinvent the blueprint, or to see only their particular module without considering other developers. That leads to a complete mess. It is also my impression, from a small sample, that an even worse mess may be created by IT specialists assigned to research teams to optimize the code without fully understanding its purpose. As Knuth wisely said: "premature optimization is the root of all evil".
And one more remark. There is a lot of risk in adopting various existing subroutines and, in many cases, what seems to be a shortcut turns, in the end, into a major restriction. For example, one could get a nice subroutine for various finite difference formulae. However, in a real code, the real trouble often starts with boundary conditions, which come with a lot of ambiguity; if the developer does not have full control of the FD implementation, there is a big chance that sooner or later s/he will have to rewrite this FD module from scratch.
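To make the boundary-condition point concrete, here is a minimal sketch (names and the zero-flux choice are my own illustration, not from any particular library): a generic FD subroutine happily produces the centered second derivative on interior points, but how to close the stencil at the walls depends entirely on the physics, e.g. via ghost points encoding a Neumann condition:

```python
import numpy as np

def d2_interior(u, h):
    """What a generic library gives you: centered second derivative,
    defined only on interior points -- the boundary is left open."""
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

def d2_with_neumann(u, h):
    """Same stencil, but the boundary is closed with a zero-flux
    (du/dx = 0) condition using mirrored ghost points -- a choice the
    physics dictates, which a black-box FD routine cannot know."""
    g = np.empty(len(u) + 2)
    g[1:-1] = u
    g[0] = u[1]    # mirror ghost point: (u[1] - g[0]) / (2h) = 0 at the left wall
    g[-1] = u[-2]  # same condition at the right wall
    return (g[:-2] - 2.0 * g[1:-1] + g[2:]) / h**2
```

The mirrored-ghost trick is only one of several second-order options; a one-sided formula or a Dirichlet ghost would give different boundary rows, which is exactly the ambiguity that forces control over the FD internals.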