Why doesn’t Fortran have a built-in matrix inverse function? This function is used a lot, and I feel like it would be easier if people didn’t have to use LAPACK.

Perhaps because the user can write her own or find code to compute the inverse, and because some users would translate

**A** * **x** = **b**

to

`x = matmul(inv(A),b)`

which is a suboptimal way to solve a set of linear equations. Maybe stdlib should define a .div. operator so that one can write

`x = b .div. A`
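As a rough illustration, such a `.div.` operator could be a thin wrapper over LAPACK’s `dgesv`. This is only a sketch of the idea: the module and procedure names are my own invention (not an existing stdlib API), and LAPACK must be linked.

```fortran
! Hypothetical sketch of the proposed .div. operator; the module and
! procedure names are assumptions, and LAPACK must be linked.
module div_operator
  implicit none
  private
  public :: operator(.div.)
  interface operator(.div.)
    module procedure solve_div
  end interface
contains
  function solve_div(b, A) result(x)
    real(8), intent(in) :: b(:), A(:,:)
    real(8) :: x(size(b))
    real(8) :: Awork(size(A,1), size(A,2))
    integer :: ipiv(size(b)), info
    Awork = A   ! dgesv overwrites the matrix with its LU factors...
    x = b       ! ...and the right-hand side with the solution
    call dgesv(size(b), 1, Awork, size(b), ipiv, x, size(b), info)
    if (info /= 0) error stop ".div.: matrix is singular"
  end function solve_div
end module div_operator
```

With this, `x = b .div. A` performs an LU solve rather than forming an inverse.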

and get an efficient implementation? In Matlab one writes

`x = b \ A`

Rather, one writes `x = A \ b`, because `x = b \ A` means “multiply the Moore–Penrose pseudoinverse of b by A”, which is quite different!

Some relevant reading for those looking for matrix inverses:

Actually it’s mostly `solve` that is used a lot, not matrix inversion, since inversion is not as accurate and has other issues, as @ivanpribec pointed out.

That being said, `inv(x)` is useful for experimenting, testing, or debugging; I’ve used it a few times. It should go into `stdlib`, in its linear algebra module.

Two examples where matrix inversion is used quite a bit are FEM codes, where you need to compute the inverse of the Jacobian matrix for isoparametric elements, and some constitutive models, where you need to compute the inverse of the deformation tensor. Fortunately these are at most 3x3 matrices, so you can write your own MATLAB-like `inv` function using direct inversion or Cramer’s rule. However, you might need to do the inverse for each element in a FEM model comprising several million elements. Don’t presume that inverses aren’t important just because your applications don’t use them.
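For the 3x3 case described above, a direct adjugate (Cramer’s rule) inverse is only a few lines of standard Fortran. This is an illustrative sketch with made-up names, not anyone’s production FEM code:

```fortran
! Minimal sketch: direct 3x3 inverse via the adjugate (Cramer's rule),
! the kind of thing one might hand-roll for FEM Jacobians.
module small_inv
  implicit none
contains
  pure function inv3(A) result(B)
    real(8), intent(in) :: A(3,3)
    real(8) :: B(3,3), det
    ! Determinant by cofactor expansion along the first row.
    det = A(1,1)*(A(2,2)*A(3,3) - A(2,3)*A(3,2)) &
        - A(1,2)*(A(2,1)*A(3,3) - A(2,3)*A(3,1)) &
        + A(1,3)*(A(2,1)*A(3,2) - A(2,2)*A(3,1))
    ! Inverse = transpose of the cofactor matrix, divided by det.
    B(1,1) =  (A(2,2)*A(3,3) - A(2,3)*A(3,2)) / det
    B(2,1) = -(A(2,1)*A(3,3) - A(2,3)*A(3,1)) / det
    B(3,1) =  (A(2,1)*A(3,2) - A(2,2)*A(3,1)) / det
    B(1,2) = -(A(1,2)*A(3,3) - A(1,3)*A(3,2)) / det
    B(2,2) =  (A(1,1)*A(3,3) - A(1,3)*A(3,1)) / det
    B(3,2) = -(A(1,1)*A(3,2) - A(1,2)*A(3,1)) / det
    B(1,3) =  (A(1,2)*A(2,3) - A(1,3)*A(2,2)) / det
    B(2,3) = -(A(1,1)*A(2,3) - A(1,3)*A(2,1)) / det
    B(3,3) =  (A(1,1)*A(2,2) - A(1,2)*A(2,1)) / det
  end function inv3
end module small_inv
```

No pivoting, no singularity check: for well-conditioned element Jacobians that trade-off is usually acceptable, and the branch-free code inlines well inside an element loop.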

I created a small GitHub repo, Matrix_Inversion, that does matrix inversion using Crout’s method, calling Fortran code by Alan Miller.

Does anyone have any suggestions/links or Fortran code snippets for computing the matrix inverse using Krylov methods?

What is the matrix rank/size for this to be true?

I am sure that for a symmetric 3x3 Jacobian it is best to use the inverse.

Probably also for a 4x4, and for most matrices used in FE element generation.

It is probably not until n is much larger or the matrix is sparse that an alternative to the inverse is best used.

For solving A . x = b, it also depends on how many columns might be in b.

I would expect a function `inv(A)` really would be useful.

However, it would need to be able to identify whether A is symmetric and also report if the inverse does not exist, which makes the function `inv(A)` a bit deficient in its reporting.

Prior to F90, the need for temporary storage was also a problem for providing an elegant solution to `inv(A)`.

For these kinds of matrices you don’t want to call LAPACK, but rather some specialized inversion that you write.

I don’t presume that. I have written such an `inv`, among other linear algebra utilities: https://github.com/certik/fortran-utils/blob/b43bd24cd421509a5bc6d3b9c3eeae8ce856ed88/src/linalg.f90#L34. The `inv` function belongs in stdlib (or a dedicated linear algebra library).

All I am saying is that you don’t want to use such an `inv` in production code (except perhaps in some very specialized cases, in which case I would like to hear about them). Regarding your FEM code, I think a hand-written inversion (or some other algorithm) will be faster than using this `inv` for your 3x3 matrix.

I’ve needed a matrix inversion many times myself, see e.g. here: https://github.com/certik/dftatom/blob/fe479fd27a7e0f77c6a88a1949996406ec935ac2/src/rdirac.f90#L70, where I needed an inversion of a 2x2 matrix. As you can see, I’ve “inlined” it by hand, as that was the fastest in this particular case.
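For the curious, inlining a 2x2 solve by Cramer’s rule amounts to one determinant and two substitutions. This sketch uses made-up names and data, not the variables from the linked code:

```fortran
! Illustrative hand-inlined 2x2 solve of [a11 a12; a21 a22] x = [b1; b2]
! by Cramer's rule; all names and values here are made up.
program inline_2x2
  implicit none
  real(8) :: a11, a12, a21, a22, b1, b2, det, x1, x2
  a11 = 4d0; a12 = 1d0; a21 = 2d0; a22 = 3d0
  b1  = 9d0; b2  = 13d0
  ! Cramer's rule, fully inlined: one determinant, two substitutions.
  det = a11*a22 - a12*a21
  x1  = (b1*a22 - a12*b2) / det
  x2  = (a11*b2 - a21*b1) / det
  print *, x1, x2
end program inline_2x2
```

No function call, no array temporaries: in a hot inner loop this can beat a general-purpose `inv` handily.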

In addition to the comments of others, where for example a linear equation solution is needed rather than an actual matrix inverse, there are other reasons to avoid computing an actual matrix inverse. Consider the situation where multiple A*x=b solutions are needed for the same A but different vectors b, not all available at the same time. This looks like it might be suitable for a matrix inverse, but even this is not the right approach. In this case, what you want to do is factor A, using Gaussian elimination, perhaps with pivoting if it is structured poorly, and then solve for the multiple right-hand sides using that factored form. Or if A is possibly singular, or numerically nearly singular, you want to do something like QR decomposition or singular value decomposition (SVD) to reveal the numerical rank, and then solve the equation using those factors. If you look at a linear algebra library, such as LAPACK, you will see dozens of ways to solve linear equations, each appropriate for some particular kind of matrix or some particular matrix structure (banded, symmetric, Hermitian, positive definite, etc.). A matrix inverse is a useful formal entity when solving problems symbolically, but it is seldom a useful computational entity.
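The factor-once, solve-many pattern looks roughly like this with LAPACK’s `dgetrf`/`dgetrs` (a sketch assuming LAPACK is linked; the matrix and right-hand sides are made up for illustration):

```fortran
! Sketch of factoring A once and reusing the LU factors for multiple
! right-hand sides; assumes LAPACK is linked.
program factor_once
  implicit none
  integer, parameter :: n = 3
  real(8) :: A(n,n), b1(n), b2(n)
  integer :: ipiv(n), info
  ! A small example matrix (stored column by column) and two RHS vectors.
  A  = reshape([2d0, 0d0, 0d0,  1d0, 3d0, 0d0,  0d0, 1d0, 4d0], [n, n])
  b1 = [3d0, 4d0, 4d0]
  b2 = [4d0, 1d0, 4d0]
  ! Factor A = P*L*U once; A is overwritten with the factors.
  call dgetrf(n, n, A, n, ipiv, info)
  if (info /= 0) error stop "A is numerically singular"
  ! Reuse the factors for each right-hand side as it becomes available.
  call dgetrs('N', n, 1, A, n, ipiv, b1, n, info)  ! b1 := A^{-1} b1
  call dgetrs('N', n, 1, A, n, ipiv, b2, n, info)  ! b2 := A^{-1} b2
  print *, b1
  print *, b2
end program factor_once
```

Each extra right-hand side then costs only a triangular back-solve, which is both cheaper and more accurate than multiplying by an explicitly formed inverse.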

As rwmsu says, there are niche applications where matrix inverse is actually what you want. But most of the time, you want to solve a linear system. Computing the inverse and multiplying by the RHS is about twice as much work as factoring the system and back solving. If the matrix is symmetric, use Cholesky’s method, which is even less work than LU decomposition (because L and U are transposes of each other if the matrix is symmetric).
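In LAPACK terms, the Cholesky route mentioned above is `dpotrf` (factor) followed by `dpotrs` (solve). A sketch, assuming LAPACK is linked and using made-up data:

```fortran
! Sketch: Cholesky factor-and-solve for a symmetric positive definite A
! via LAPACK's dpotrf/dpotrs; assumes LAPACK is linked.
program chol_solve
  implicit none
  integer, parameter :: n = 2
  real(8) :: A(n,n), b(n)
  integer :: info
  ! A symmetric positive definite matrix and a right-hand side.
  A = reshape([4d0, 2d0, 2d0, 5d0], [n, n])
  b = [8d0, 9d0]
  ! Factor A = L*L^T; only the lower triangle is referenced/overwritten.
  call dpotrf('L', n, A, n, info)
  if (info /= 0) error stop "A is not positive definite"
  ! Solve using the factor; b is overwritten with the solution x.
  call dpotrs('L', n, 1, A, n, b, n, info)
  print *, b
end program chol_solve
```

Since only one triangle of A is stored and factored, this is roughly half the work and half the memory traffic of a general LU solve.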