Research articles using Fortran

Ahmed Ketata, Zied Driss, “New FORTRAN meanline code for investigating the volute to rotor interspace effect on mixed flow turbine performance”, April 2021, Proceedings of the Institution of Mechanical Engineers, Part E: Journal of Process Mechanical Engineering, DOI: 10.1177/09544089211007023

I have no access, but the abstract says:

This paper presents an improved validated meanline code, written in the newest object-oriented version of the FORTRAN language, for turbomachinery performance prediction.


It’s my understanding that the convention is to use FORTRAN for FORTRAN 77 and earlier, and Fortran for Fortran 90 and newer. Given that they use object-oriented Fortran, they must be using at least Fortran 2003, which introduced object-oriented features, so I think they should use the spelling Fortran.


You are right. CRC Press has a forthcoming book

Numerical Recipes in Quantum Information Theory and Quantum Computing: An Adventure in FORTRAN 90
By M.S. Ramkarthik and Payal D. Solanki
This first of a kind textbook provides computational tools in Fortran 90 that are fundamental to quantum information, quantum computing, linear algebra and one dimensional spin half condensed matter systems. Over 160 subroutines are included, and the numerical recipes are aided by detailed flowcharts. Suitable for beginner and advanced readers alike, students and researchers will find this textbook to be a helpful guide and a compendium.

The description gets the capitalization right, but the title and table of contents do not. I found the first author’s email on arXiv and emailed him, but it bounced. If anyone has a contact at CRC Press, maybe they can inform them before the book is printed.

More substantively, I wonder if the authors are really restricting themselves to Fortran 90, and if so, why. They could probably replace “FORTRAN 90” with “Fortran 2018” or “Modern Fortran”.

On an unrelated note on how things evolve, CRC used to stand for Chemical Rubber Company, which published handbooks of math formulas and physical data.


Gu, Changfeng, Yingquan Peng, and Wenli Lv. “Simulation Software Development for Charge Transport Characteristics in Organic Semiconductors Based on VB and Fortran Mixed Programming”. In Proceedings of 2019 International Conference on Optoelectronics and Measurement, edited by Yingquan Peng and Xinyong Dong, 113–19. Lecture Notes in Electrical Engineering. Singapore: Springer, 2021.

The introduction says:

Due to programs of Fortran would run faster than any other high-level programming language, Fortran is widely used in engineering calculations for scientific operations. But when it comes to Visual interface design, Fortran has no advantage over other visual programming languages like Visual Basic.

VB graphics function is used to realize the visualization of Fortran calculation program [3]. Write computation code in Fortran to generate DLL (Dynamic Link Library) files, then use VB language to call DLL files to achieve its computation function. The call between Fortran and VB is basically achieved through the file search call, thus achieving hybrid programming.


People here will not be surprised by the conclusion, but for people not familiar with compiled languages it can surely be useful:

Pasdar, Amir Hossein, Shahram Azadi, and Reza Kazemi. “A Comparative Study on the Efficiency of Compiled Languages and MATLAB/Simulink for Simulation of Highly Nonlinear Automotive Systems”. Journal of Applied and Computational Mechanics, no. Online First (May 2020).


Therefore, a comparison between Simulink, which is frequently used by researchers for simulating automotive systems, and Fortran 2003 as a representative of the compiled languages has been performed. The ease of programming in Fortran, providing legacy support for older codes and its superb adaptation to numerical calculation were the main reasons of choosing it over other similar languages like C and C++. MATLAB-like languages like Java and Python were excluded from the comparison as MATLAB is superior in integrity, support and toolboxes to them. Thus, only MATLAB/Simulink and Fortran 2003 were considered for the test.


This is when a compiled language, especially Fortran as the front-runner in scientific computing, can vastly improve the performance. When it comes to simulating real-world models which are highly detailed and include phase changes in their states, Fortran is able to increase the performance to an unprecedented level. The time spent modeling the system in the compiled language does not considerably differ from the time spent developing a model in Simulink and, furthermore, the final work hour performance value of the Fortran program becomes much lower than the Simulink program which argues the superior productivity of a compiled language for highly nonlinear dynamical systems. Thus, it is reasonable that a compiled language should be considered for modeling and simulating these system types. Especially, if the program has to be repeated for further investigation e.g., designing an optimized controller, a compiled language should be the primary modeling environment as the Simulink program would lead to diminishing results.


I have some Iranian colleagues, and they have told me that at Iranian (engineering) universities MATLAB, Fortran, and Maple are still very common. Google Trends shows that Python might be prevailing over MATLAB (no idea what’s with the spike in October 2019):

I don’t want to downplay the conclusions of the paper or the impact of this journal, but I’ve been told by colleagues that many Iranian universities require master’s students to publish a paper as part of their degree requirements. If you look at this journal, it was established in 2015, and the editorial board still consists mainly of Iranian scientists.

Recently, I came across this preprint

Lindner, M., Lincoln, L., Drauschke, F., Koulen, J. M., Würfel, H., Plietzsch, A., & Hellmann, F. (2020). NetworkDynamics.jl: Composing and simulating complex networks in Julia. arXiv preprint arXiv:2012.12696.

where the authors show Julia is faster for network modelling (ODEs) than Fortran and SciPy. The ODE solver they used is an F77 code from Hairer (Fortran Codes). However, the last part of their report states:

This argument also holds for the Fortran version, but only in the case of a very large experimental set - in the benchmark performed in this paper with 300 different combinations of system and initial conditions, the overall time to execute using Fortran is considerably faster than Julia. In the case of our group, the sparse-multiplication Fortran version took more developer time than the Julia version; a trend which we suspect holds given a user equally skilled in both languages. More topically: the Fortran version does not include the abstractions available in our software NetworkDynamics.jl - a new ODE system described by a new set of equations would require only small modifications to the NetworkDynamics.jl benchmarking script, but an entire rewrite of the Fortran program. Finally we note that it is possible to fully compile Julia programs to an executable which suffers no JIT cost, largely negating the discussion of startup time and JIT time.

What we can conclude is that if Fortran had good support for sparse matrices, a good family of ODE solvers, and possibly some tools for automatic generation of Jacobians (SymPy?), it could still remain very competitive. However, given the amount of effort that has gone into the Julia differential equations packages, it might be interesting to just try calling Julia from Fortran (GitHub - brocolis/dynload-julia: Dynamically load Julia from Fortran) rather than duplicate all their work.
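To make the SymPy idea concrete, here is a small sketch of my own (the Robertson-style right-hand side and the variable names are illustrative placeholders, not anything from the papers above): build the ODE right-hand side symbolically, take the Jacobian automatically, and let SymPy’s Fortran code printer emit source for the entries.

```python
import sympy as sp

# Symbolic state variables and a Robertson-style stiff kinetics right-hand side
y1, y2, y3 = sp.symbols('y1 y2 y3')
f = sp.Matrix([
    -0.04*y1 + 1e4*y2*y3,
     0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
     3e7*y2**2,
])

# Automatic symbolic Jacobian df_i/dy_j
J = f.jacobian(sp.Matrix([y1, y2, y3]))

# Emit Fortran source for one Jacobian entry via SymPy's code printer
print(sp.fcode(J[1, 1], assign_to='dfdy22', standard=95))
```

The generated assignments could then be pasted into, or templated for, the analytic Jacobian routine that an implicit stiff solver expects.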


Stefano Zaghi has made a great starting point for a modern differential equations package in Fortran:

We just need more people to join the force.

Symbolic programming is something the Fortran community needs to discuss. Should we have it in Fortran, or do we just need a good interface to some trusted library?


I would interface SymEngine, we already started here: GitHub - symengine/symengine.f90: Fortran wrappers of SymEngine


That’s a great initiative. Should be seriously pursued. But I guess, Fortran is always short on hands!


I was able to get in touch with the British customer service to address the question of the capitalization. The e-mail included a link to the list of books in the English edition of Wikipedia as contemporary examples. Without going into much detail: after about a week, the reply received was “In this case, the authors have decided to go with the capitalization.”

Based on other books in Taylor & Francis’s portfolio (Counihan 1996, Kupferschmid 2009, or Ray 2019), there is possibly no company-wide rule.


I stumbled upon this paper and library today:

CN-Stream: Open-source library for nonlinear regular waves using stream function theory by Guillaume Ducrozet, Benjamin Bouscasse, Maïté Gouin, Pierre Ferrant, Félicien Bonnefoy. Paper | Code


Benchmarking of numerical integration methods for ODE models of biological systems

Scientific Reports, volume 11, Article number 2696 (2021)


Ordinary differential equation (ODE) models are a key tool to understand complex mechanisms in systems biology. These models are studied using various approaches, including stability and bifurcation analysis, but most frequently by numerical simulations. The number of required simulations is often large, e.g., when unknown parameters need to be inferred. This renders efficient and reliable numerical integration methods essential. However, these methods depend on various hyperparameters, which strongly impact the ODE solution. Despite this, and although hundreds of published ODE models are freely available in public databases, a thorough study that quantifies the impact of hyperparameters on the ODE solver in terms of accuracy and computation time is still missing. In this manuscript, we investigate which choices of algorithms and hyperparameters are generally favorable when dealing with ODE models arising from biological processes. To ensure a representative evaluation, we considered 142 published models. Our study provides evidence that most ODEs in computational biology are stiff, and we give guidelines for the choice of algorithms and hyperparameters. We anticipate that our results will help researchers in systems biology to choose appropriate numerical methods when dealing with ODE models.
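The stiffness point is easy to demonstrate with a minimal experiment of my own (not from the paper, and using SciPy rather than any solver benchmarked there): on the classic Robertson kinetics problem, an implicit BDF integrator needs far fewer right-hand-side evaluations than an explicit Runge-Kutta method.

```python
from scipy.integrate import solve_ivp

def robertson(t, y):
    """Robertson's stiff chemical kinetics problem, a standard test case."""
    y1, y2, y3 = y
    return [-0.04*y1 + 1e4*y2*y3,
             0.04*y1 - 1e4*y2*y3 - 3e7*y2**2,
             3e7*y2**2]

y0 = [1.0, 0.0, 0.0]

# Explicit Runge-Kutta: the step size is limited by stability, not accuracy
explicit = solve_ivp(robertson, (0.0, 10.0), y0, method="RK45", rtol=1e-6, atol=1e-8)
# Implicit BDF: the usual choice for stiff problems
implicit = solve_ivp(robertson, (0.0, 10.0), y0, method="BDF", rtol=1e-6, atol=1e-8)

print(explicit.nfev, implicit.nfev)  # the implicit solver needs far fewer evaluations
```

The `nfev` counts make the cost of the wrong hyperparameter choice, here the integration algorithm, directly visible.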


Rasmussen, Soren; Gutmann, Ethan D.; Moulitsas, Irene; Filippone, Salvatore. 2021. “Fortran Coarray Implementation of Semi-Lagrangian Convected Air Particles within an Atmospheric Model” ChemEngineering 5, no. 2: 21.

Discussed here. There is a NEC Numeric Library Collection similar to NAG or IMSL. I have noticed on GitHub that many Fortranners write finite element programs for computational fluid dynamics.

International Conference on High Performance Computing
ISC High Performance 2021: High Performance Computing pp 255-271

Evaluation of the NEC Vector Engine for Legacy CFD Codes
Many codes that are still in production use trace their origins to
code developed during the vector supercomputing era from the 1970’s to
1990’s. The recently released NEC Vector Engine (VE) provides an
opportunity to exploit this vector heritage. The VE can provide
state-of-the-art performance without a complete rewrite of a
well-validated codebase. Programs do not require an additional level
of abstraction to use the capabilities of the VE. Given the time and
cost required to port or rewrite codes, this is an attractive
solution. Further tuning as described in this paper can realize
maximum performance.

The goal was to assess how the NEC VE’s performance and ease of use
compare with that of existing CPU architectures (e.g. AMD, Intel)
using a legacy Computational Fluid Dynamics (CFD) solver, FDL3DI
written in Fortran. FDL3DI was originally vectorized and optimized for
efficient operation on vector processing machines. The NEC VE’s
architecture, high memory bandwidth and ability to compile Fortran was
the primary motivation for this evaluation.

Through profiling and modifying the key compute kernels using typical
vector and NEC VE specific optimizations, the code was successfully
able to utilize the vector engine hardware with minimal modification
of the code. Scalar code developed later in FDL3DI’s lifetime was
substituted with vector friendly implementations. With optimizations,
this vector architecture was found to be 3× faster for main-memory
bound problems with the CPU architectures competitive for smaller
problem sizes. This performance using standard well-known techniques
is considered to be a key benefit of this architecture.

Keywords: Vectorization, CFD, Optimization


Fast modeling of turbulent transport in fusion plasmas using neural networks
Physics of Plasmas 27 , 022310 (2020)

In this work, we have identified and worked around two limitations of the TensorFlow framework at time of writing. First, TensorFlow is most commonly used for deep learning. In deep learning, the amount of training samples vs the size of the network, and thus its evaluation speed, is relatively small. In this work, the networks are shallow and the amount of training samples large. As such, we have implemented a simple shuffling algorithm using numpy,38 which is a factor of 2 faster for this application. This results in 1.25× (CPU) to 2× (Tesla P100 GPU) reduction of training time. Second, TensorFlow uses its own proprietary format to save the trained neural network weights and biases to disk. This would mean any integrated framework would need to depend on TensorFlow/python to use the neural network predictions. This is inconvenient and non-performant for many integrated frameworks, especially if they are in MATLAB (RAPTOR) and Fortran (JINTRAC). Instead, we wrote a lightweight communication format using JSON between TensorFlow, and a re-implementation of the network in Fortran with wrappers for Python and MATLAB. Using these MKL accelerated native Fortran functions, the baseline QLKNN model, seven networks with three layers and 128 neurons each for 24 radial points, can be evaluated within 1.4 ms on a single core or 60 ms if the derivatives of the neural network output with respect to the neural network inputs are also evaluated. This can be accelerated to 0.3 ms and 9 ms respectively by parallelizing over 7 cores using MPI. These timings were obtained on an Intel(R) Xeon(R) CPU E5-2665 0 at 2.40 GHz. The FFNN and QLKNN wrappers are freely available at
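The framework-neutral weight exchange they describe is easy to sketch. Below is a stdlib-only toy of my own (the layer shapes and weight values are made up; this is NOT the actual QLKNN format): serialize each layer’s weights and biases to JSON, then re-implement the feed-forward pass natively; a Fortran reader of the same JSON would perform exactly the same arithmetic.

```python
import json
import math

# Toy stand-in for trained weights (NOT the real QLKNN values or layout):
# a 2-input, 3-neuron tanh hidden layer and a linear output layer.
net = {
    "layers": [
        {"w": [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1]], "b": [0.0, 0.1, -0.2], "act": "tanh"},
        {"w": [[1.0], [-1.0], [0.5]], "b": [0.2], "act": "linear"},
    ]
}

# Framework-neutral dump that a Fortran (or MATLAB) implementation could parse
blob = json.dumps(net)

def forward(x, net):
    """Feed-forward evaluation; a native re-implementation does the same arithmetic."""
    for layer in net["layers"]:
        w, b = layer["w"], layer["b"]
        x = [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j] for j in range(len(b))]
        if layer["act"] == "tanh":
            x = [math.tanh(v) for v in x]
    return x

y = forward([1.0, 2.0], json.loads(blob))
```

Decoupling the weights from the training framework in this way is what lets the integrated codes drop the TensorFlow/Python dependency entirely.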

Papers associated with the GitHub repo REAL (Rapid Earthquake Association and Location)

  1. Zhang, M., W.L. Ellsworth, and G.C. Beroza. Rapid Earthquake Association and Location, Seismol. Res. Lett., 90.6, 2276-2284, 2019.
  2. Liu, M., M. Zhang, W. Zhu, W.L. Ellsworth, and H. Li. Rapid Characterization of the July 2019 Ridgecrest, California Earthquake Sequence from Raw Seismic Data using Machine Learning Phase Picker. Geophysical Research Letters, 47(4), e2019GL086189, 2020.
  3. Wang, R., B. Schmandt, M. Zhang, M. Glasgow, E. Kiser, S. Rysanek, and R. Stairs. Injection-induced earthquakes on complex fault zones of the Raton Basin illuminated by machine-learning phase picker and dense nodal array. Geophysical Research Letters, 47(14), e2020GL088168, 2020.
  4. Kissling, E., W.L. Ellsworth, D. Eberhart-Phillips, and U. Kradolfer. Initial reference models in local earthquake tomography, J. Geophys. Res., 99, 19635-19646, 1994.
  5. Waldhauser, F. and W.L. Ellsworth. A double-difference earthquake location algorithm: Method and application to the northern Hayward fault, Bull. Seism. Soc. Am., 90, 1353-1368, 2000, doi: 10.1785/0120000006.

from @Beliavsky

is associated with the paper

Yamaya, L., Mochizuki, K., Akuhara, T., Nishida, K. (2021). Sedimentary structure derived from multi-mode ambient noise tomography with dense OBS network at the Japan Trench. Journal of Geophysical Research: Solid Earth , 126, e2021JB021789.

for the paper

N. Secco, G. K. W. Kenway, P. He, C. A. Mader, and J. R. R. A. Martins, “Efficient Mesh Generation and Deformation for Aerodynamic Shape Optimization”, AIAA Journal, 2021. doi:10.2514/1.J059491

Hello. In this thread which you started, how recent should papers be? I think I will post papers from the current and previous year, so currently 2020 and 2021. There is a GitHub package (piclas-framework/piclas) with several papers from 2019 that I am not posting to Discourse.

In my mind, it was for posting new papers. But you can of course post lists of less recent papers if they are valuable for the community.
