Thread-parallel execution common blocks / DVODE

Hi all,

I’m using DVODE to solve stiff differential equations within a finite element simulation in Abaqus. Ideally, the code should run in parallel across multiple CPUs for different cases. However, DVODE’s use of common blocks prevents this, as it isn’t thread-safe.
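
To illustrate the issue (with made-up names, not DVODE’s actual source): anything kept in a COMMON block is a single shared copy for every thread, so concurrent calls from an OpenMP loop race on that state:

```fortran
! Illustration only: COMMON-block state is shared by all threads, so
! concurrent calls from an OpenMP loop are not thread-safe.
subroutine fake_solver(case_id, result)
   implicit none
   integer, intent(in) :: case_id
   double precision, intent(out) :: result
   double precision :: h, t                 ! pretend internal solver state
   common /fake_state/ h, t                 ! one shared copy for all threads
   h = 1.0d-3 * case_id
   t = 0.0d0
   ! ... many internal steps that read and update h and t ...
   result = h + t                           ! may see values written by another thread
end subroutine fake_solver

program race_demo
   implicit none
   integer :: i
   double precision :: res(4)
   !$omp parallel do
   do i = 1, 4
      call fake_solver(i, res(i))           ! data race on /fake_state/
   end do
   !$omp end parallel do
   print *, res                             ! results depend on thread timing
end program race_demo
```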

Is there a straightforward way to make DVODE thread-safe? Alternatively, can anyone recommend a good Fortran-based solver for stiff ODEs that doesn’t rely on external libraries and supports parallel execution?

Thanks in advance!


I started this a while back:

I think I removed all the common blocks, but I don’t remember whether this was ever finished. If you try it, let me know if it works.


In 2013,
G.D. Byrne, Illinois Institute of Technology, Chicago, IL
S. Thompson, Radford University, Radford, VA

released an F90 version of DVODE (possibly with some improvements) based on the original F77 DVODE. They put the DVODE solver in a module, and it no longer has the common-block issue you mentioned. I have used this F90 DVODE inside an MPI program, and it works fine. Initially this F90 DVODE was hosted on Thompson’s Radford University website, but that link no longer works. Fortunately, I have a backup of the website.
As you can see from @jacobwilliams’ GitHub link, at the bottom of his GitHub page you can find the backup of that website.

You can download the backup website, open index.html, and you will see something like the following:

On that page, if you click
VODE_F90 Source Code

Demo Programs

you get access to the F90 DVODE version, together with example programs that show how to use it (see the minimal usage sketch further below). As mentioned, I used them in my MPI program (no OpenMP) and it works fine.

If you click
Kyle Niemeyer’s OpenMP compatible version of dvode_f90_m.f90

you get an OpenMP-compatible version of DVODE.
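
To give an idea of how the module version is called, here is a minimal sketch written from memory of the demo programs (the classic Robertson kinetics problem used in example1.f90); please check the attached files for the exact SET_NORMAL_OPTS keywords and the required interface of the derivative routine:

```fortran
! Minimal usage sketch of the F90 DVODE module; option names and the
! derivative-routine interface should be verified against example1.f90.
module robertson_rhs
   implicit none
contains
   subroutine fex(neq, t, y, ydot)
      integer, intent(in) :: neq
      double precision, intent(in) :: t
      double precision, intent(in) :: y(neq)
      double precision, intent(out) :: ydot(neq)
      ! Robertson chemical kinetics: the classic stiff three-equation test
      ydot(1) = -0.04d0*y(1) + 1.0d4*y(2)*y(3)
      ydot(3) = 3.0d7*y(2)*y(2)
      ydot(2) = -ydot(1) - ydot(3)
   end subroutine fex
end module robertson_rhs

program stiff_demo
   use dvode_f90_m          ! the module from dvode_f90_m.f90
   use robertson_rhs
   implicit none
   integer, parameter :: neq = 3
   double precision :: y(neq), t, tout
   integer :: itask, istate
   type(vode_opts) :: options

   y = [1.0d0, 0.0d0, 0.0d0]
   t = 0.0d0
   tout = 0.4d0
   itask = 1                ! normal output at t = tout
   istate = 1               ! first call to the solver
   ! Internally generated dense Jacobian with scalar tolerances
   options = set_normal_opts(dense_j=.true., relerr=1.0d-8, abserr=1.0d-10)
   call dvode_f90(fex, neq, y, t, tout, itask, istate, options)
   print '(a,f8.4,a,3es14.6)', ' t =', t, '  y =', y
end program stiff_demo
```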

Of course, @jacobwilliams’ modernized DVODE may be even better.

————————————–
PS.

For your convenience, I have uploaded the F90 version and its examples, as well as the OpenMP-compatible version, here:

example1.f90 (3.8 KB)

example2.f90 (3.8 KB)

dvode_f90_m.f90 (716.1 KB)

OpenMP_dvode_f90_m.f90 (698.5 KB)


Thanks a lot to both of you for these answers! I will try to implement it in the near future.


Can you give more details about this MPI program with DVODE? (Can you share it?) What did you do with it, and with MPI? I am interested in ZVODE with MPI, basically taking a large ODE system and distributing it.

What the MPI code does is described in this paper: https://ascpt.onlinelibrary.wiley.com/doi/full/10.1002/psp4.13113

But to be honest, I don’t feel I could share the code, and the code itself is not really what matters here for the DVODE F90 version.
However, I can describe what the code does and what MPI is used for.
It is mostly a Monte Carlo program, so you need a number of samples (say 1000) to calculate a sample average. A sample can be a log-likelihood, which uses a quantity (a predicted value) obtained by solving a handful of ODEs (say 3).
What MPI does is this: say we have 10 cores and 1000 samples; each core calculates 100 samples one by one. As mentioned, each sample requires solving those few ODEs, and the ODE solving is done with the F90 version of the DVODE solver.
All I want to say is that in my MPI code there was no need to make any changes to the F90 DVODE code, because, as you can see from this description, I am not trying to parallelize the ODE-solving process itself (a rough sketch of the pattern is shown below).
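
To illustrate that pattern (this is not the actual code; evaluate_sample is a hypothetical placeholder that would internally call DVODE_F90 to solve the few ODEs of each sample), the per-rank loop looks roughly like this:

```fortran
! Sketch of distributing Monte Carlo samples over MPI ranks; each rank
! solves its samples' ODEs serially with the (now thread-safe) F90 DVODE.
program mc_mpi
   use mpi
   implicit none
   integer, parameter :: nsamples = 1000
   integer :: ierr, rank, nprocs, i
   double precision :: local_sum, total_sum, sample_value

   call mpi_init(ierr)
   call mpi_comm_rank(mpi_comm_world, rank, ierr)
   call mpi_comm_size(mpi_comm_world, nprocs, ierr)

   local_sum = 0.0d0
   ! Each rank handles every nprocs-th sample; with 10 ranks and 1000
   ! samples that is 100 samples per rank, processed one after another.
   do i = rank + 1, nsamples, nprocs
      call evaluate_sample(i, sample_value)   ! would call DVODE_F90 internally
      local_sum = local_sum + sample_value
   end do

   call mpi_reduce(local_sum, total_sum, 1, mpi_double_precision, &
                   mpi_sum, 0, mpi_comm_world, ierr)
   if (rank == 0) print *, 'sample average =', total_sum / nsamples

   call mpi_finalize(ierr)

contains

   subroutine evaluate_sample(id, val)
      integer, intent(in) :: id
      double precision, intent(out) :: val
      ! Placeholder: a real implementation would set up and solve the
      ! sample's small ODE system (e.g. 3 equations) with DVODE_F90 here.
      val = dble(id)
   end subroutine evaluate_sample

end program mc_mpi
```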

Of course, if you have a very large system, say 99999 ODEs, and you want a solver that uses all available CPU cores to solve that one system, then you need a genuinely parallel ODE solver. As far as I know, DVODE and ZVODE are not natively parallel solvers; making them parallel (if that is possible) would require additional work. The DVODE solver we are discussing here is not such a parallel solver.
Perhaps you could create a new topic about parallel ODE solvers and see if people have recommendations.
Perhaps PVODE

https://journals.sagepub.com/doi/10.1177/109434209901300405

is more relevant to your case? The paper says you may contact Alan Hindmarsh for the PVODE code.