I presented the inaugural paper on the Julienne correctness-checking framework at the U.S. Research Software Engineering Conference last month.
Julienne supports expressive idioms inspired by natural language sentences such as
test_diagnosis = x .approximates. y .within. tolerance
where the expression evaluates to an object containing a logical pass/fail indicator and, if the test fails, a diagnostic character string. Most Julienne operators are elemental, so the above syntax can drive element-wise comparisons of multidimensional arrays, with diagnostic messages produced only for failing elements. The same expressions can be used in preprocessor macros that are removed in the absence of the compile-time -DASSERTIONS flag and thus have no impact on execution times in production code:
call_julienne_assert(i .equalsExpected. j)
wherein Julienne exploits the asymmetry implied by the .equalsExpected. operator to automatically construct a diagnostic message indicating which value is expected and which is actual. Julienne assertions can be invoked even inside pure procedures, which addresses a reason developers frequently cite for not writing pure procedures: the inability to print runtime values when debugging. The act of printing implies an expectation of what program state is valid. The Julienne philosophy: express the expectation in an assertion. If the assertion succeeds, no output is needed or given. If the assertion fails, output is obtained at the cost of error termination.
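To illustrate the pattern behind such operator chains, here is a minimal sketch, not Julienne's actual implementation: the module, type, and procedure names below are hypothetical. Two elemental user-defined operators cooperate, with .approximates. capturing the operand pair and .within. producing the diagnosis object described above:

```fortran
module assertion_sketch_m
  !! Hypothetical sketch of the user-defined-operator pattern;
  !! names and internals are illustrative, not Julienne's source.
  implicit none
  private
  public :: operator(.approximates.), operator(.within.), test_diagnosis_t

  type approximation_t   ! intermediate result of x .approximates. y
    real actual, expected
  end type

  type test_diagnosis_t  ! final result of ... .within. tolerance
    logical :: passed = .false.
    character(len=:), allocatable :: diagnostics
  end type

  interface operator(.approximates.)
    module procedure approximates
  end interface

  interface operator(.within.)
    module procedure within
  end interface

contains

  elemental function approximates(actual, expected) result(pair)
    real, intent(in) :: actual, expected
    type(approximation_t) pair
    pair = approximation_t(actual, expected)
  end function

  elemental function within(pair, tolerance) result(diagnosis)
    type(approximation_t), intent(in) :: pair
    real, intent(in) :: tolerance
    type(test_diagnosis_t) diagnosis
    character(len=64) message

    diagnosis%passed = abs(pair%actual - pair%expected) <= tolerance
    if (diagnosis%passed) then
      diagnosis%diagnostics = ""
    else
      ! internal-file writes are permitted in pure/elemental procedures
      write(message, '(a,g0,a,g0)') "actual ", pair%actual, &
                                    " differs from expected ", pair%expected
      diagnosis%diagnostics = trim(message)
    end if
  end function

end module
```

Because user-defined binary operators are left-associative, `x .approximates. y .within. tolerance` parses as `(x .approximates. y) .within. tolerance`, and because both functions are elemental, the same expression works unchanged whether the operands are scalars or conforming arrays.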
Julienne’s scaffold program accepts a short JSON file naming the test subjects, from which it generates a skeletal test suite, including a test driver program. See Generating Test Scaffolding.
Julienne supports testing parallel programs that use Fortran’s multi-image parallel features. Julienne’s explicit use of multi-image features has a small source-code footprint (a single-digit number of lines in the core library), so only minimal effort would likely be needed to offer optional support for other parallel programming models, such as the Message Passing Interface (MPI), if requested or contributed by interested parties.