Well, there you are just measuring compilation time (in Julia, a function is compiled the first time it is called). Run the function twice and you will see its actual performance:

```
julia> @time mydot(a,b,N)
0.013960 seconds (25.05 k allocations: 1.596 MiB, 89.23% compilation time)
250280.42409540198
julia> @time mydot(a,b,N)
0.002155 seconds (1 allocation: 16 bytes)
250280.42409540198
```

That is roughly the same as the Fortran version (`Time: 0.00295899808406830 s`).
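To make the effect explicit, here is a small self-contained sketch using only Base's `@elapsed` (the BenchmarkTools package's `@btime` is the more rigorous tool, since it runs the call many times and discards compilation):

```julia
# Naive dot product, as in the thread above.
function mydot(a, b, N)
    c = 0.0f0
    for i in 1:N
        c += a[i] * b[i]
    end
    return c
end

a = rand(Float32, 1_000_000)
b = rand(Float32, 1_000_000)
N = length(a)

t1 = @elapsed mydot(a, b, N)  # first call: includes JIT compilation
t2 = @elapsed mydot(a, b, N)  # second call: compiled code only
println("first call:  $t1 s")
println("second call: $t2 s")
```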

So, no trickery there. The version using `LinearAlgebra.dot` is actually much faster:

```
julia> @time c = dot(a, b)
0.000527 seconds (1 allocation: 16 bytes)
250412.44f0
```
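For completeness, `dot` lives in the `LinearAlgebra` standard-library module, so the snippet above assumes it has been loaded:

```julia
using LinearAlgebra  # standard library; exports dot

a = Float32[1.0, 2.0, 3.0]
b = Float32[4.0, 5.0, 6.0]
dot(a, b)  # Float32 result, computed by an optimized kernel
```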

Because it is loop-vectorized. You can achieve that in the custom Julia version by adding some macros (and by fixing the fact that you were returning a Float64, caused by initializing with `c = 0.`; the function can be made generic by using `c = zero(eltype(a))` instead):

```
julia> function mydot(a, b, N)
           c = 0.f0
           @inbounds @simd for i = 1:N
               c += a[i]*b[i]
           end
           c
       end
mydot (generic function with 1 method)
julia> @time mydot(a,b,N)
0.022222 seconds (44.48 k allocations: 2.825 MiB, 97.20% compilation time)
250412.59497003
julia> @time mydot(a,b,N)
0.000728 seconds (1 allocation: 16 bytes)
250412.59497003
```
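The generic version mentioned above would look like this (a sketch; `mydot_generic` is just an illustrative name):

```julia
# The accumulator takes the element type of `a`, so Float32 inputs
# yield a Float32 result and Float64 inputs a Float64 result.
function mydot_generic(a, b, N)
    c = zero(eltype(a))
    @inbounds @simd for i in 1:N
        c += a[i] * b[i]
    end
    return c
end

mydot_generic(Float32[1, 2, 3], Float32[4, 5, 6], 3)  # returns 32.0f0
```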

(Unfortunately, `-march=native` didn't seem to provide the same speedup in Fortran; I thought it would.)

My point is: these micro-benchmarks are always a waste of resources, because all of these languages can get close to the best possible performance; you just need to know what you are doing. Julia does have a startup/compilation lag, which is mostly irrelevant for real HPC applications (though it can be annoying for interactive use in some cases, and it is being worked on extensively by the developers).

The point raised by @implicitall illustrates why the Julia community is so eager to show that Julia works: incorrect use or interpretation of simple tests can leave bad first impressions.

Concerning the comment of @shahmoradi: I cannot comment specifically on that video claiming a 3x speedup over Fortran. But I have had the same experience with some packages of mine, even though I have spent much more time working on the Fortran codes than on the Julia ones. The best performance I have obtained using Julia came from being able to improve the *algorithms* more easily than I could in Fortran, basically because of better tooling. If I ported the algorithms back to Fortran, I would get similar performance, of course.

And that should not be a surprise here; after all, isn't LFortran being developed precisely to provide better tooling for Fortran? I will not be surprised if in a few years we see posts like "How LFortran helped me improve the speed of my (Fortran) code 3X". That is what I expect to see, in fact, and it is the reason I am here in this forum: to follow the development of LFortran.

All of these languages can generate optimal code. The choice of one over another should not be based on the performance of properly written and benchmarked code, because that will be the same. The choice should be about syntax, familiarity, stability, long-term support, distribution, tooling, community, etc. A lot of subjective reasons, and some objective ones, that favor one or the other.