Time-stepping method for cylindrical diffusion equation

I am attempting to solve a diffusion equation for the concentration field u(r,t) in cylindrical (p=1) or spherical (p=2) coordinates,

\frac{\partial u}{\partial t} = \frac{1}{r^p}\frac{\partial}{\partial r} \left(r^p D(u) \frac{\partial u}{\partial r}\right), \qquad r \in [0, R]

After spatial discretization with the Galerkin finite element method, I end up with a system of ODEs of the form,

C \dot{\mathbf u} = A(\mathbf{u}) \mathbf{u} + \mathbf{b}(\mathbf{u})

where C is also known as the capacity or mass matrix, A is the diffusivity/conductivity matrix,
\mathbf{b} is the concentration supply vector (it captures the effect of boundary conditions), \mathbf{u} is the unknown the concentration vector, and \dot{\mathbf{u}} is the time (t) derivative of \mathbf{u}. Due to the nonlinear property relations, such as the diffusivity D(u(x,t)), the mass matrix and supply vector change with time, but we can assume they have been linearized around the previous time step.

For the time discretization, introductory textbooks generally cover the classic methods, such as explicit (forward) Euler, the Crank–Nicolson scheme, and implicit (backward) Euler.

These can be combined into a common form known as the generalized trapezoidal method or \theta-method. Since the mass matrix is not diagonal, a matrix problem must be solved even for explicit time discretization methods, like explicit Euler or the standard explicit Runge-Kutta methods.
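For the linearized system above, one step of the θ-method amounts to a single linear solve. A minimal NumPy sketch (the 3×3 matrices here are made up purely for illustration, not taken from a real FEM assembly):

```python
import numpy as np

def theta_step(C, A, b, u, dt, theta=0.5):
    """One generalized-trapezoidal (theta-method) step for C u' = A u + b.

    theta = 0   -> explicit Euler (still needs a solve with C),
    theta = 1/2 -> Crank-Nicolson,
    theta = 1   -> implicit Euler.
    Solves (C - theta*dt*A) u_{k+1} = (C + (1-theta)*dt*A) u_k + dt*b.
    """
    lhs = C - dt * theta * A
    rhs = (C + dt * (1.0 - theta) * A) @ u + dt * b
    return np.linalg.solve(lhs, rhs)

# Illustrative symmetric mass and conductivity matrices
C = np.array([[2.0, 0.5, 0.0], [0.5, 2.0, 0.5], [0.0, 0.5, 2.0]])
A = np.array([[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -2.0]])
b = np.zeros(3)
u = np.array([1.0, 0.0, 0.0])

u_new = theta_step(C, A, b, u, dt=0.1)
```

Note that even θ = 0 (explicit Euler) requires a solve with C here, precisely because the mass matrix is not diagonal.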

I’ve also seen three-time-level methods in engineering textbooks, such as:

  • fully implicit three-time-level (TTL) scheme (other terms are evaluated at time t_{k + 1}; it may also be blended with the implicit Euler scheme), also known as BDF2:
\dot{\mathbf{u}} \approx \frac{3 \mathbf{u}_{k+1} - 4 \mathbf{u}_k + \mathbf{u}_{k-1}}{2 \Delta t}
  • three-step ad hoc finite difference method using arithmetic mean:
\dot{\mathbf{u}} \approx \frac{1}{2}\left(\frac{\mathbf{u}_{k+1} - \mathbf{u}_k}{\Delta t} + \frac{\mathbf{u}_{k} - \mathbf{u}_{k-1}}{\Delta t}\right) = \frac{\mathbf{u}_{k+1} - \mathbf{u}_{k-1}}{2 \Delta t} \\ \mathbf{u} \approx \frac{1}{3}\left(\mathbf{u}_{k+1} + \mathbf{u}_{k} + \mathbf{u}_{k-1}\right)
  • three-level ad hoc method with improved stability, combining levels t_{k-2}, t_k, and t_{k+1} (other terms evaluated at time t_k):
\dot{\mathbf{u}} \approx \frac{\mathbf{u}_{k+1} - \mathbf{u}^*_{k-1}}{2 \Delta t} \\ \mathbf{u}^*_{k-1} = \frac{1}{2}\left(\mathbf{u}_k + \mathbf{u}_{k-2} \right)

The three-level methods are not self-starting and are typically started with a few steps of the implicit Euler method. In the form shown above, the methods assume a fixed time step, which leaves no possibility for temporal error control. They are at best second-order accurate, but are supposed to have good stability properties.

The alternative path is to use the method of lines (MOL) and combine the semi-discretized system with a ready-to-use (Fortran) library routine. Since I know the problem becomes stiff on fine grids, here are the integrators I’ve been considering:

| Integrator | Problem form | Jacobian | Download link |
|---|---|---|---|
| LSODA | \dot{y} = f(t,y) | Dense, banded | ODEPACK \| Computing |
| LSODI | M(t,y)\,\dot{y} = f(t,y) | Dense, banded | ODEPACK \| Computing |
| VODE | \dot{y} = f(t,y) | Dense, banded | ODEPACK \| Computing |
| MEBDF | M\,\dot{y} = f(t,y) | Dense, banded | https://www.netlib.org/ode/mebdfdae.f |
| MEBDFI | g(t,y,\dot{y}) = 0 | Dense, banded | https://www.netlib.org/ode/mebdfi.f |
| DASSL | g(t,y,\dot{y}) = 0 | Dense, banded | https://www.netlib.org/ode/ddassl.f |
| M3RK | \dot{y} = f(t,y) | (spectral radius, †) | Algorithm 553: M3RK, An Explicit Time Integrator for Semidiscrete Parabolic Equations [D3] \| ACM Transactions on Mathematical Software |
| RKC | \dot{y} = f(t,y) | (spectral radius, †) | https://www.netlib.org/ode/rkc.f |
| DDEABM | \dot{y} = f(t,y) | – | ddeabm.f |
| DDEBDF | \dot{y} = f(t,y) | Dense, banded | ddebdf.f |
| RADAU5 | M\,\dot{y} = f(t,y) | Dense, banded | Fortran Codes |
| RADAU | M\,\dot{y} = f(t,y) | Dense, banded | Fortran Codes |
| RODAS | M\,\dot{y} = f(t,y) | Dense, banded | Fortran Codes |
| SEULEX | M\,\dot{y} = f(t,y) | Dense, banded | Fortran Codes |
| CVODE | \dot{y} = f(t,y) | Generic | CVODE \| Computing |
| ARKODE | M(t)\,\dot{y} = f(t,y) | Generic | ARKODE \| Computing |
| IDA | g(t,y,\dot{y}) = 0 | Generic | IDA \| Computing |
| SPRINT | M(t,y)\,\dot{y} = f(t,y) | Dense, banded, sparse | D02MN Sub-chapter Introduction : NAG Library, Mark 29 |

† The M3RK routine of Verwer, and the RKC routine of Sommeijer and Verwer, can use an estimate of the spectral radius (if known) to determine the maximum stable step length.
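When no analytic bound is at hand, a cheap estimate of the spectral radius can be obtained with a few power iterations on the semi-discrete operator. A small Python sketch (the tridiagonal matrix J here is just a stand-in for the actual Jacobian C^{-1}A):

```python
import numpy as np

def spectral_radius_estimate(matvec, n, iters=50, seed=0):
    """Estimate the spectral radius of a linear operator by power iteration.

    matvec: function computing J @ v without needing J explicitly.
    Returns the norm ratio after `iters` iterations (a lower estimate
    that approaches the true spectral radius).
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    rho = 0.0
    for _ in range(iters):
        w = matvec(v)
        rho = np.linalg.norm(w)
        if rho == 0.0:
            break
        v = w / rho
    return rho

# Stand-in operator: the 1-D Laplacian stencil tridiag(1, -2, 1),
# whose spectral radius approaches 4 as n grows
n = 50
J = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
rho = spectral_radius_estimate(lambda v: J @ v, n)
```

Codes like RKC then choose the number of stages so that the stability interval along the negative real axis covers rho times the step size.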

Since the FEM discretization given above is linearly implicit, it can always be recast into an explicit form,

\mathbf{\dot{u}} = C^{-1}(A(\mathbf{u}) \mathbf{u} + \mathbf{b}(\mathbf{u})),

hence explicit solvers can be used in principle. In practice this is done by factorizing the matrix C once at the beginning and performing a back substitution during each right-hand-side evaluation. This is inconvenient because the factorized matrix must be available inside the ODE callback function. I’m also afraid that for implicit solvers of the form \dot{y} = f(t,y), such as LSODA, VODE, CVODE, or DDEBDF, which internally approximate the Jacobian \partial f / \partial y by finite differencing, this would be more expensive than really needed: while the matrix C is generally sparse, its inverse is dense, so the Jacobian becomes dense too. This technique is therefore really only suitable for strictly explicit solvers.

Further Fortran packages are available, such as FOODIE, rklib, FLINT, or FATODE, but these appear to be focused mainly on non-stiff or mildly stiff problems, explicit time-stepping, and high accuracy.


Which of the solvers above would you recommend?

I know this is not strictly a Fortran-related question, since the choice of solver is mostly dictated by the properties of the system of ODEs. Nonetheless, I am hoping that some Discourse members may have previous experience with this type of problem.

I’d also appreciate any pointers to literature discussing the selection of time discretization methods. The most authoritative reference I am aware of is – Hundsdorfer, W., & Verwer, J. G. (2003). Numerical solution of time-dependent advection-diffusion-reaction equations. Berlin: Springer. – in which the authors state:

[…] the conventional explicit Runge-Kutta methods are in general of little practical use for diffusion equations. Diffusion equations are often better solved with A-stable implicit methods which provide unconditional stability, or with special explicit methods which possess very long stability intervals along the negative real line.

Based on the recommendations in the Julia DifferentialEquations.jl package, I should pick a method like TRBDF2 (a one-step ESDIRK method), Rodas4P (a Rosenbrock method designed for stiff parabolic PDEs), or radau (an implicit Runge–Kutta method). The MATLAB pdepe solver for parabolic PDEs in one spatial variable uses the integrator ode15s, based on variable-order numerical differentiation formulas (similar to the methods used in GEAR and LSODE). The NAG parabolic PDE solver dim1_parab_fd, which uses the same semi-discretization algorithm as pdepe in MATLAB, relies on backward differentiation formulas (BDF).


This very recent Computers and Fluids paper may (or may not) be of help. You might also consider a higher-order scheme (DG, spectral element, etc.) coupled with an implicit or explicit Strong Stability Preserving (SSP) RK scheme. With the higher-order schemes you might not need as fine a mesh to get equivalent results.

See

Yadav, Singh, Maurya and Rajoot, “New RK type time-integration methods for stiff convection-diffusion-reaction systems,” Computers and Fluids, Vol. 257, 15 May 2023.

https://www.sciencedirect.com/science/article/pii/S0045793023000907?via%3Dihub

To provide a constructive response to your question, you need to specify the needs and constraints (accuracy, execution time, feasibility of effective parallelization, possibility of step adaptation, presence of a monotonicity property, and so on). There is no single ideal approach to time integration; different integrators may be optimal for different problems. To solve simple parabolic equations, the basic implicit Euler method (which is L-stable and unconditionally monotone) is sometimes sufficient.

Because non-physical oscillations may arise in the solution, A-stability alone may not be sufficient; L-stability helps you avoid this problem.
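This is easy to see on the Dahlquist test problem \dot{y} = \lambda y: the stability function of the A-stable trapezoidal rule tends to -1 as \lambda \Delta t \to -\infty, so stiff modes flip sign each step with almost no damping, whereas the L-stable implicit Euler stability function tends to 0. A quick Python check:

```python
def trapezoidal_factor(z):
    """Stability function R(z) = (1 + z/2) / (1 - z/2) of the trapezoidal rule."""
    return (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

def implicit_euler_factor(z):
    """Stability function R(z) = 1 / (1 - z) of implicit Euler."""
    return 1.0 / (1.0 - z)

z = -100.0                       # a very stiff mode: lambda * dt << -1
r_trap = trapezoidal_factor(z)   # near -1: sign-flipping, barely damped
r_ie = implicit_euler_factor(z)  # near 0: strongly damped
```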


I’ve had good results with the ROCK2 code, which is a variant of RKC. I found I did need to switch off the PI time-step regulator, and the spectral radius estimator sometimes returned NaN, so I replaced its result with the upper limit supplied to the routine.

How about using a ‘semi-implicit Fourier spectral algorithm’? It allows much larger time-step sizes.

Here is the link to the article:

https://www.sciencedirect.com/science/article/pii/S001046559700115X?via%3Dihub

The abstract says

An efficient and accurate numerical method is implemented for solving the time-dependent Ginzburg–Landau equation and the Cahn–Hilliard equation. The time variable is discretized by using semi-implicit schemes which allow much larger time step sizes than explicit schemes; the space variables are discretized by using a Fourier-spectral method whose convergence rate is exponential in contrast to second order by a usual finite-difference method.

The implementation is also fairly straightforward. I have used it in MATLAB because of the built-in support for the FFT.
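For the constant-D diffusion part of the problem in the original post, on a periodic domain, the semi-implicit update reduces to a pointwise division in Fourier space. A minimal 1-D NumPy sketch (the diffusion term is treated implicitly; the generic explicit term N(u), here zero, stands in for the Cahn–Hilliard/Ginzburg–Landau nonlinearity):

```python
import numpy as np

n, L, D, dt = 128, 2.0 * np.pi, 1.0, 0.1
x = np.linspace(0.0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2.0 * np.pi   # integer wavenumbers for L = 2*pi

u = np.sin(x)                                   # initial condition

def semi_implicit_step(u, nonlinear=lambda u: np.zeros_like(u)):
    """One semi-implicit step: diffusion implicit, nonlinearity explicit.

    In Fourier space: u_hat_new = (u_hat + dt * N_hat) / (1 + dt * D * k^2).
    """
    u_hat = np.fft.fft(u)
    n_hat = np.fft.fft(nonlinear(u))
    u_hat_new = (u_hat + dt * n_hat) / (1.0 + dt * D * k**2)
    return np.fft.ifft(u_hat_new).real

for _ in range(10):
    u = semi_implicit_step(u)
```

Since each step is a diagonal solve, the per-step cost is just two FFTs, regardless of how large the implicit time step is.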

We developed a simple algorithm for time-stepping the Schrödinger equation:

i\frac{\partial}{\partial t}\phi_i({\bf r},t) = \hat{H}[\phi(t)]\,\phi_i({\bf r},t),

where i=1\ldots N labels a particular orbital.

Our primary concerns were unitarity, stability, and efficiency. Because \hat{H} depends on the orbitals \phi(t), the time evolution is non-linear, which makes it challenging.

The algorithm is as follows:

  1. Expand each orbital in a suitable basis: \phi_j({\bf r},t)=\sum_i c_{ij}(t)\chi_i({\bf r})
  2. Determine H_{ij}(t)=\langle\chi_i|\hat{H}(t)|\chi_j\rangle
  3. Solve the eigenvalue problem H_{ik}a_{kj}=\epsilon_j a_{ij}
  4. Time-propagate using c_{ij}(t+\Delta t)=\sum_{kl}a_{ik}e^{-i\epsilon_k\Delta t}a_{lk}^* c_{lj}(t)
  5. If t<T goto step 1

We have found that this allows for quite large time-steps (in fact it’s unitary for arbitrarily large steps). It’s also fast if you choose a basis which is small but complete enough for the accuracy required.
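Steps 3 and 4 of the algorithm above can be sketched with NumPy as follows (the 4×4 Hermitian matrix is made up for illustration; in a real calculation H would be rebuilt from the orbitals at each step):

```python
import numpy as np

def propagate(H, c, dt):
    """Advance coefficients c by dt via c <- a exp(-i*eps*dt) a^H c,
    where H = a diag(eps) a^H is the eigendecomposition of the
    (Hermitian) Hamiltonian matrix in the chosen basis."""
    eps, a = np.linalg.eigh(H)
    U = (a * np.exp(-1j * eps * dt)) @ a.conj().T  # scales columns of a
    return U @ c

# Made-up Hermitian 4x4 Hamiltonian and a normalized coefficient vector
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = 0.5 * (M + M.conj().T)
c = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)

c1 = propagate(H, c, dt=10.0)   # unitary even for a large step
```

Because exp(-i*eps*dt) has unit modulus for real eps, the propagator is exactly unitary for any step size, which is the property emphasized above.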

If you can recast your problem as

\frac{\partial u}{\partial t} = \hat{H}[u(t)] u

at each time-step, you may be able to use a similar algorithm.

(How do you get single line equations on this forum? The usual LaTeX $$ … $$ doesn’t work.)

I have a little bit of practical experience with implicit integrators for convection-diffusion type equations. But first, a question: are you stuck with a non-diagonal mass matrix? I do not have a lot of experience with finite-element methods, but my general feeling is that a finite difference or (pseudo)spectral discretization will be simpler, if for no reason other than the fact that the mass matrix will be the identity or diagonal. Getting rid of C will immediately widen the range of canned routines at your disposal.

Methods I have personal experience with:

  • Radau (used from Scipy). Generally awesome when high accuracy is wanted. I have never used it in a PDE method-of-lines context, so I can’t speak to how it performs on high-dimensional systems. But on the problems I have tried it on, I’m very impressed with its accuracy compared to the other stiff methods in Scipy (BDF, LSODA).
  • TRBDF2. I used this one in the context of a large (N\sim1000) hyperbolic equation which supported lots of unwanted fast oscillation modes. Trapezoid/Crank–Nicolson was unsuitable because it has no numerical dissipation to suppress the unwanted modes, while also introducing serious aliasing errors into those modes and spoiling the solution. TRBDF2 has just a tiny bit of dissipation, which was all I needed.
  • Backward Euler. Sometimes you want to just march toward a steady state and don’t care too much how you get there. Many domain-specific implicit methods are basically backward Euler with specialized iteration strategies to preserve certain properties (conservation, monotonicity, etc.).

Another family of methods worth mentioning is super-time-stepping. I have no direct experience with these, but I have read that they are especially good for explicitly integrating parabolic problems. This paper gives an approachable introduction to the method.
