I really like this thread, as it shows how easily this community can be derailed by hinting at possible non-performance of Fortran … totally understandable, given that the main selling point (i.e., when advising your PhD student which language to learn) is high performance. If the performance goes out of the window … what is left??
Some more marketing headaches to come:
Comparing the C++ program below
#include <cmath>
#include <cstddef>
#include <vector>
#include <ctime>
#include <numeric>
#include <random>
#include <iostream>

using ff = double;   // floating-point type used throughout

int main( void ){
    // Fill x with 10^6 uniform random numbers scaled to [0, 200); fixed seed for reproducibility.
    std::vector<ff> x(1000000), y(1000000);
    std::mt19937 g(12345);
    std::uniform_real_distribution<ff> dist(0, 1);
    for( std::size_t i = 0; i < x.size(); ++i ) x[i] = dist(g) * (ff)200;
    x[0] = (ff)1;

    // Time n repetitions of evaluating j1() over the whole array and report the mean.
    int n = 10;
    double time = 0.0;
    for( int rep = 0; rep < n; ++rep ){
        std::clock_t c_start = std::clock();
        for( std::size_t i = 0; i < x.size(); ++i ) y[i] = j1(x[i]);
        std::clock_t c_end = std::clock();
        time += (double)(c_end - c_start) / CLOCKS_PER_SEC;
    }
    ff time_elapsed_s = time / (double)n;   // mean CPU time per repetition, in seconds
    std::cout << "CPU time used: " << time_elapsed_s << " s\n";
}
with the code provided by @FortranFan, I got the following:
gfortran -O3 -ffast-math tmp.f90
rcs > ./a.out
BESSEL_J1 Evaluation: Mean compute time for 1000000 evaluations:
7.1370466001098976E-002 seconds.
Standard deviation in calculated results: 0.0000000000000000
ifort -O3 tmp.f90
rcs > ./a.out
BESSEL_J1 Evaluation: Mean compute time for 1000000 evaluations:
5.189895629882812E-002 seconds.
Standard deviation in calculated results: 0.000000000000000E+000
g++ -O3 -ffast-math tmp.cpp
rcs > ./a.out
CPU time used: 0.0444952 s
implying that ifort needed about 17% more time, and gfortran about 60% more time, than the C++ program. But most likely I am not comparing apples with apples.
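For readers who do not have @FortranFan's post at hand, below is a minimal sketch of what such a Fortran timing loop could look like. This is my own approximation, not the actual code from that post: the array size, repetition count, scaling of the random inputs, the use of SYSTEM_CLOCK, and the final checksum are all assumptions on my part.

program bessel_j1_timing
   ! Minimal sketch: time n repetitions of BESSEL_J1 over an array of
   ! 10**6 uniform random numbers scaled to [0, 200) and report the mean.
   use, intrinsic :: iso_fortran_env, only: dp => real64, i8 => int64
   implicit none
   integer, parameter :: m = 1000000, n = 10
   real(dp), allocatable :: x(:), y(:)
   integer(i8) :: t0, t1, rate
   real(dp) :: total
   integer :: k

   allocate( x(m), y(m) )
   call random_number( x )          ! uniform in [0,1); default seeding
   x = 200.0_dp * x
   x(1) = 1.0_dp

   total = 0.0_dp
   do k = 1, n
      call system_clock( t0, rate )
      y = bessel_j1( x )            ! elemental intrinsic applied to the whole array
      call system_clock( t1 )
      total = total + real( t1 - t0, dp ) / real( rate, dp )
   end do

   print '(a,i0,a,es12.5,a)', 'Mean compute time for ', m, ' evaluations: ', &
         total / real( n, dp ), ' seconds'
   print '(a,es12.5)', 'Checksum (keeps the optimizer honest): ', sum( y )
end program bessel_j1_timing

The checksum of y is printed only so that an aggressive optimizer cannot drop the BESSEL_J1 calls; the same trick could be applied to the C++ version, whose y array is otherwise never read.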