All of the code could be converted to an efficient single-line computation:
pure function getNegLogLike(x, n, mu, sigma) result(negLogLike)
    use iso_fortran_env, only: RK => real64
    integer, intent(in) :: n
    real(RK), intent(in) :: x(n), mu, sigma
    real(RK) :: negLogLike
    ! log(sqrt(2*pi)), folded at compile time; acos(-1) = pi.
    real(RK), parameter :: HALF_LOG2PI = log(2 * acos(-1._RK)) / 2
    ! Negative log-likelihood of n iid samples from N(mu, sigma).
    negLogLike = sum((x - mu)**2) / (2 * sigma**2) + n * (HALF_LOG2PI + log(sigma))
end function getNegLogLike
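To make the claim easy to check, here is a hypothetical driver (not part of the original answer; the sample values are mine) that verifies the one-line formula against a naive, term-by-term evaluation of the Gaussian log-density:

```fortran
program sanityCheck
    use iso_fortran_env, only: RK => real64
    implicit none
    real(RK), parameter :: SQRT2PI = sqrt(2 * acos(-1._RK))
    integer, parameter :: n = 5
    real(RK) :: x(n), direct, fast
    x = [1._RK, 2._RK, 3._RK, 4._RK, 5._RK]
    ! Naive evaluation with mu = 3, sigma = 2: negate the summed log-densities.
    direct = -sum(log(exp(-(x - 3._RK)**2 / (2 * 2._RK**2)) / (2._RK * SQRT2PI)))
    fast = getNegLogLike(x, n, 3._RK, 2._RK)
    if (abs(fast - direct) > 1.e-10_RK) error stop "formula mismatch"
    print *, "ok: negLogLike =", fast
contains
    pure function getNegLogLike(x, n, mu, sigma) result(negLogLike)
        integer, intent(in) :: n
        real(RK), intent(in) :: x(n), mu, sigma
        real(RK) :: negLogLike
        real(RK), parameter :: HALF_LOG2PI = log(2 * acos(-1._RK)) / 2
        negLogLike = sum((x - mu)**2) / (2 * sigma**2) + n * (HALF_LOG2PI + log(sigma))
    end function getNegLogLike
end program sanityCheck
```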
If n does not change across your repeated calls, you may want to turn it into a compile-time parameter (removing it from the argument list), in which case you will likely see another round of performance gain over the above implementation. For an efficient multivariate implementation of this, see for example Sampling a Multivariate Normal (MVN) distribution | ParaMonte: Parallel Monte Carlo Library.
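A minimal sketch of that variant, under the assumption that the sample size is fixed at build time (the program name and driver values here are mine, not from the original):

```fortran
program fixedSampleSize
    use iso_fortran_env, only: RK => real64
    implicit none
    integer, parameter :: n = 1000  ! compile-time constant: no longer a dummy argument
    real(RK) :: x(n), nll
    call random_number(x)
    nll = getNegLogLike(x, 0.5_RK, 1._RK)
    print *, nll
contains
    ! Same formula as before, but the fixed extent n lets the compiler
    ! specialize (unroll/vectorize) the reduction over a known-size array.
    pure function getNegLogLike(x, mu, sigma) result(negLogLike)
        real(RK), intent(in) :: x(n), mu, sigma
        real(RK) :: negLogLike
        real(RK), parameter :: HALF_LOG2PI = log(2 * acos(-1._RK)) / 2
        negLogLike = sum((x - mu)**2) / (2 * sigma**2) + n * (HALF_LOG2PI + log(sigma))
    end function getNegLogLike
end program fixedSampleSize
```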
If you are sampling this Gaussian model via MCMC methods, I highly recommend sampling log(sigma) instead of sigma. In that case, you would pass logSigma instead of sigma as an input argument to your function. There is no performance benefit, but there are sound Bayesian probabilistic reasons why logSigma should be sampled rather than sigma: sigma is a positive scale parameter, and sampling its logarithm removes the positivity boundary so the chain can explore the parameter space unconstrained. Also, if sampling with MCMC, logLike will have to be returned instead of negLogLike.
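A sketch of that reparameterized interface (the name getLogLike, the driver values, and the use of exp() to recover sigma inside are my assumptions, not the original author's code):

```fortran
program logSigmaSampling
    use iso_fortran_env, only: RK => real64
    implicit none
    real(RK) :: x(3), ll
    x = [0.5_RK, 1.5_RK, 2.5_RK]
    ll = getLogLike(x, 3, 1.5_RK, log(2._RK))  ! pass logSigma = log(2), not sigma = 2
    print *, ll
contains
    pure function getLogLike(x, n, mu, logSigma) result(logLike)
        integer, intent(in) :: n
        real(RK), intent(in) :: x(n), mu, logSigma
        real(RK) :: logLike
        real(RK), parameter :: HALF_LOG2PI = log(2 * acos(-1._RK)) / 2
        ! Positive log-likelihood (note the sign flip relative to negLogLike);
        ! sigma = exp(logSigma) appears only in the quadratic term.
        logLike = -sum((x - mu)**2) / (2 * exp(logSigma)**2) - n * (HALF_LOG2PI + logSigma)
    end function getLogLike
end program logSigmaSampling
```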
Also, I forgot to say that the floating-point constants must be suffixed with _RK, as in the above, to retain full 64-bit precision. Otherwise, if 64-bit precision is not needed, change everything to real(real32) and your function will likely become roughly twice as fast.
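For example, a single-precision variant only requires changing the kind association; every _RK-suffixed constant then follows automatically (a sketch, with made-up driver values):

```fortran
program singlePrecision
    use iso_fortran_env, only: RK => real32  ! the only change: a 32-bit kind
    implicit none
    real(RK) :: x(4), nll
    x = [1._RK, 2._RK, 3._RK, 4._RK]  ! _RK now yields real32 constants
    nll = getNegLogLike(x, 4, 2.5_RK, 1._RK)
    print *, nll
contains
    pure function getNegLogLike(x, n, mu, sigma) result(negLogLike)
        integer, intent(in) :: n
        real(RK), intent(in) :: x(n), mu, sigma
        real(RK) :: negLogLike
        real(RK), parameter :: HALF_LOG2PI = log(2 * acos(-1._RK)) / 2
        negLogLike = sum((x - mu)**2) / (2 * sigma**2) + n * (HALF_LOG2PI + log(sigma))
    end function getNegLogLike
end program singlePrecision
```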