Note that this benchmark is really flawed. I believe the results they see come from the fact that their Julia code sucks in ways their Fortran doesn’t. For example, the Julia version defines
```julia
function JacobiConstant(odesol)
    X = odesol[1:3,:]
    Xdot = odesol[4:6,:]
    ~,n = size(X)
    # distances
    R1 = copy(X)
    R2 = copy(X)
    R1[1,:] = R1[1,:] .+ mu
    R2[1,:] = R2[1,:] .- 1 .+ mu
    r1 = [norm(R1[:,i]) for i = 1:n]
    r2 = [norm(R2[:,i]) for i = 1:n]
    # radius and velocity magnitudes
    R = [sqrt(sum((X.^2)[:,i])) for i = 1:n]
    V = [sqrt(sum((Xdot.^2)[:,i])) for i = 1:n]
    # pseudo-potential
    Omega = 1/2*(R.^2) .+ (1-mu)./r1 .+ mu./r2
    # Jacobi's Constant
    C = (V.^2)/2 .- Omega
    return C
end;
```
In Fortran, meanwhile, they define
```fortran
function JacobiC(mu, X)
    real(wp), intent(in) :: mu
    real(wp), dimension(:,:), intent(in) :: X
    real(wp), dimension(size(X,2)) :: JacobiC
    real(wp), dimension(size(X,2)) :: Omega
    real(wp), dimension(size(X,2)) :: r1, r2
    r1 = sqrt((X(1,:) + mu)**2 + X(2,:)**2 + X(3,:)**2)
    r2 = sqrt((X(1,:) - 1.0 + mu)**2 + X(2,:)**2 + X(3,:)**2)
    Omega = 1.0/2.0*sum(X(1:3,:)**2,1) + (1.0-mu)/r1 + mu/r2
    ! Jacobi's Constant
    JacobiC = sum(X(4:6,:)**2,1)/2 - Omega
end function JacobiC
```
The Julia version here is incredibly inefficient: when you do something like `[sqrt(sum((X.^2)[:,i])) for i = 1:n]`, you are computing `X.^2` `n` times only to throw away all but one column, and allocating `n` matrices and `n` vectors just to discard them by summing over them. Just using `@views` appropriately and doing less stupid stuff should easily regain the performance deficit. Since this is the core of the benchmark, it’s kind of hard for me to believe the authors are trying to make a fair comparison when the Julia code is so obviously stupidly written: for this function, they would have gotten way better performance by just following the same logic their Fortran version used, instead of randomly computing a ton of intermediate arrays, allocating them, and then throwing them away.
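For concreteness, here’s a rough sketch (mine, not the benchmark’s; I’m also assuming `mu` gets passed in as an argument rather than captured as a global, which is itself a well-known Julia performance trap) of what the function looks like if you just transliterate the Fortran logic with `@views`:

```julia
# Sketch only: same math as the Fortran JacobiC, written column-wise.
# `mu` is an argument here; the original closed over a global `mu`.
function jacobi_constant(mu, X::AbstractMatrix)
    @views begin
        # distances to the two primaries -- one O(n) temporary each,
        # instead of n full matrices
        r1 = sqrt.((X[1,:] .+ mu).^2 .+ X[2,:].^2 .+ X[3,:].^2)
        r2 = sqrt.((X[1,:] .- 1 .+ mu).^2 .+ X[2,:].^2 .+ X[3,:].^2)
        # pseudo-potential
        Omega = vec(sum(abs2, X[1:3,:], dims=1)) ./ 2 .+ (1 - mu) ./ r1 .+ mu ./ r2
        # Jacobi's constant, one value per column of X
        return vec(sum(abs2, X[4:6,:], dims=1)) ./ 2 .- Omega
    end
end
```

An explicit loop over columns would eliminate even the remaining O(n) temporaries, but even this straight transliteration should get rid of the pathological allocations that dominate the original.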