An excellent resource for beginners is the book *Modern Fortran: Building Efficient Parallel Applications* by @milancurcic, particularly if you’re interested in using Fortran for scientific computing applications. The common theme of all the chapters is the gradual implementation of a well-structured solver for the shallow water equations. The numerical techniques employed are sufficiently simple and intuitive that you don’t need to know much about numerical analysis (e.g. finite differences, time-integration schemes, etc.) to understand the gist of it, and you can instead focus on the good coding practices in Fortran. I’ve used the book as learning material for an introductory course on scientific computing at my university and received pretty positive feedback from the students. We’ve also used bits and pieces of it with high-school students who came to do a two-week internship in the lab, and they were able to follow along pretty easily.
Beyond that, it kind of depends on your background and on whatever skills you may already have in other languages. In general, whenever I learn a new language, I start by re-implementing relatively simple algorithms that I’m already very familiar with from other languages. In this phase, I put a lot of emphasis on using only intrinsic features of the language rather than reaching for more advanced libraries. This usually gives me a pretty good feeling for how similar or dissimilar the two languages are from a syntax point of view, as well as some intuition about how easy (or not) one is to use compared to the other.
I have a background in applied mathematics (mostly numerical linear algebra, convex optimization and control theory) as well as in computational fluid dynamics. The typical algorithms I tend to implement when learning a new language are the following:
- Linear algebra: solving Ax = b
  - Jacobi solver for diagonally dominant matrices and/or symmetric positive definite ones.
  - Gauss-Seidel solver for the same matrices.
  - Gaussian elimination for solving a square system of linear equations.
  - QR factorization using the modified Gram-Schmidt orthogonalization process.
All of these algorithms can easily be implemented using only intrinsic features of the language (e.g. `matmul`, `dot_product`, `norm2`, `do` and `do concurrent`, etc.). I usually restrict myself to fairly simple implementations first to really get the gist of it (e.g. I typically do not use pivoting for the Gaussian elimination or the QR decomposition to begin with). My rule of thumb is to start with what seems to me like the most naïve way to implement a given algorithm and only incrementally improve the code, to get a better understanding of which changes actually improve or degrade performance. Do not hesitate to run your codes with matrices of varying sizes and to use even a simple call to `system_clock` to time your code and get somewhat quantitative performance measures; a minimal sketch of such a timed Jacobi solver is shown below.
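To make this concrete, here is a rough sketch of what such a Jacobi solver, timed with `system_clock`, might look like using only intrinsics. The matrix size, tolerance, iteration cap and stopping criterion are arbitrary choices for the illustration, not anything prescribed above.

```fortran
program jacobi_demo
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   integer, parameter :: n = 500, maxiter = 1000
   real(dp), parameter :: tol = 1.0e-10_dp
   real(dp) :: A(n,n), b(n), x(n), x_new(n)
   integer :: i, k, t_start, t_end, count_rate

   ! Random, strictly diagonally dominant test matrix and right-hand side.
   call random_number(A)
   do i = 1, n
      A(i,i) = sum(abs(A(i,:))) + 1.0_dp
   end do
   call random_number(b)

   x = 0.0_dp
   call system_clock(t_start, count_rate)
   do k = 1, maxiter
      ! Jacobi update: x_new(i) = (b(i) - sum_{j /= i} A(i,j)*x(j)) / A(i,i)
      do i = 1, n
         x_new(i) = (b(i) - dot_product(A(i,:), x) + A(i,i)*x(i)) / A(i,i)
      end do
      ! Simple stopping criterion based on the size of the update.
      if (norm2(x_new - x) < tol*norm2(b)) exit
      x = x_new
   end do
   call system_clock(t_end)
   x = x_new

   print '(a,i0,a,es10.3)', 'iterations: ', k, ', final residual: ', norm2(b - matmul(A, x))
   print '(a,f8.3,a)', 'elapsed time: ', real(t_end - t_start, dp) / count_rate, ' s'
end program jacobi_demo
```

The Gauss-Seidel variant only requires updating `x` in place inside the inner loop, which makes it a nice first exercise in seeing how a seemingly small change affects both convergence and data dependencies.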
- Convex optimization: minimizing a convex quadratic form $f(x) = \frac{1}{2} x^T P x - x^T q$
  - Gradient descent with fixed step size
  - Gradient descent with optimal step size (steepest descent)
  - Conjugate gradient
Once again, all of these algorithms can be implemented using only standard features of the language; a minimal sketch of the steepest descent variant is shown below. If you’re familiar with convex optimization, you can then make the problem more complex by introducing linear equality or inequality constraints.
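As a rough sketch of the second bullet (gradient descent with the optimal step size for a quadratic), something along these lines is enough. The symmetric positive definite matrix is built from a random factor plus a diagonal shift purely for the sake of the example.

```fortran
program steepest_descent_demo
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   integer, parameter :: n = 200, maxiter = 10000
   real(dp), parameter :: tol = 1.0e-10_dp
   real(dp) :: B(n,n), P(n,n), q(n), x(n), r(n)
   real(dp) :: alpha
   integer :: i, k

   ! Build a symmetric positive definite matrix P = B^T B + shift and a random q.
   call random_number(B)
   P = matmul(transpose(B), B)
   do i = 1, n
      P(i,i) = P(i,i) + real(n, dp)
   end do
   call random_number(q)

   x = 0.0_dp
   do k = 1, maxiter
      r = q - matmul(P, x)          ! negative gradient of f(x) = 1/2 x^T P x - x^T q
      if (norm2(r) < tol) exit
      alpha = dot_product(r, r) / dot_product(r, matmul(P, r))   ! exact line search
      x = x + alpha*r
   end do

   print '(a,i0,a,es10.3)', 'iterations: ', k, ', gradient norm: ', norm2(q - matmul(P, x))
end program steepest_descent_demo
```

Swapping the exact line search for a fixed `alpha`, or replacing the update direction with conjugate directions, only changes a couple of lines, which makes the three variants easy to compare against one another.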
- Ordinary and partial differential equations
  - Simulating the Lorenz system in the chaotic regime (already mentioned by @hkvzjal) using a Runge-Kutta scheme; a minimal sketch is shown right after this list.
  - Simulating the unsteady heat equation on a square domain using finite differences and a semi-implicit Crank-Nicolson scheme. This one allows you to re-use the linear algebra bits implemented earlier.
  - Simulating the Navier-Stokes equations for the lid-driven cavity flow using a vorticity-streamfunction formulation. Admittedly a more advanced project, but once you’re familiar with the rest, it should still be relatively easy to do.
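For the Lorenz system, a classical fourth-order Runge-Kutta integrator fits in a few dozen lines. The parameters below are the usual chaotic-regime values (sigma = 10, rho = 28, beta = 8/3); the time step, number of steps and initial condition are arbitrary choices for the sketch.

```fortran
program lorenz_rk4
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real(dp), parameter :: sigma = 10.0_dp, rho = 28.0_dp, beta = 8.0_dp/3.0_dp
   real(dp), parameter :: dt = 0.01_dp
   integer, parameter :: nsteps = 10000
   real(dp) :: u(3), k1(3), k2(3), k3(3), k4(3)
   integer :: i

   u = [1.0_dp, 1.0_dp, 1.0_dp]   ! arbitrary initial condition
   do i = 1, nsteps
      ! Classical fourth-order Runge-Kutta step.
      k1 = rhs(u)
      k2 = rhs(u + 0.5_dp*dt*k1)
      k3 = rhs(u + 0.5_dp*dt*k2)
      k4 = rhs(u + dt*k3)
      u = u + dt/6.0_dp*(k1 + 2.0_dp*k2 + 2.0_dp*k3 + k4)
      ! Dump the trajectory to stdout (redirect to a file for plotting).
      print '(4es16.8)', i*dt, u
   end do

contains

   pure function rhs(u) result(dudt)
      real(dp), intent(in) :: u(3)
      real(dp) :: dudt(3)
      dudt(1) = sigma*(u(2) - u(1))
      dudt(2) = u(1)*(rho - u(3)) - u(2)
      dudt(3) = u(1)*u(2) - beta*u(3)
   end function rhs

end program lorenz_rk4
```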
I do acknowledge that this is a fairly biased list of learning projects. They are however sufficiently simple that you should easily find implementations in other languages against which to compare your solution for validation purposes. They are also sufficiently simple that each would typically require only a couple of hours to complete, while still exercising a fairly large subset of the language’s intrinsic features. Once I’ve implemented these and I’m happy with my understanding, I then start looking at lower-level optimizations to improve performance (e.g. cache blocking, loop unrolling, etc.) and at replacing some bits and pieces with fairly standard libraries/packages (e.g. LAPACK).
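For that last step, a typical first move is to replace the hand-written Gaussian elimination with a call to LAPACK’s `dgesv`. A minimal sketch, assuming you link against a LAPACK implementation (e.g. `gfortran solve.f90 -llapack`), might look like this; the test system is again an arbitrary random one.

```fortran
program lapack_solve
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   integer, parameter :: n = 500
   real(dp) :: A(n,n), A0(n,n), b(n), x(n)
   integer :: ipiv(n), info, i

   ! Same kind of random, diagonally dominant system as in the hand-written solvers.
   call random_number(A)
   do i = 1, n
      A(i,i) = A(i,i) + real(n, dp)
   end do
   call random_number(b)
   A0 = A          ! keep copies: dgesv overwrites A with its LU factors
   x  = b          ! and the right-hand side with the solution

   call dgesv(n, 1, A, n, ipiv, x, n, info)
   if (info /= 0) error stop 'dgesv failed'

   print '(a,es10.3)', 'residual: ', norm2(b - matmul(A0, x))
end program lapack_solve
```

Comparing its runtime against your own naïve Gaussian elimination, with the same `system_clock` timing as before, is usually a sobering and instructive exercise.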