Why is my code compiled with GFortran on Windows slower than on Ubuntu?

If you have multiple versions of Gfortran installed along with MinGW, Cygwin, WSL, etc., you have to be careful not to get the paths and environments mixed up. Here is how to check.

After building an EXE, run the Cygwin ldd utility on it. For the version compiled with Cygwin+Gfortran, I see:

T:\LANG\RChen>gfortran -O2 ran.f90 samplers.f90 EM_mix.f90

T:\LANG\RChen>ldd a.exe
        ntdll.dll => /cygdrive/c/WINDOWS/SYSTEM32/ntdll.dll (0x7ffabeee0000)
        KERNEL32.DLL => /cygdrive/c/WINDOWS/System32/KERNEL32.DLL (0x7ffabd8a0000)
        KERNELBASE.dll => /cygdrive/c/WINDOWS/System32/KERNELBASE.dll (0x7ffabc940000)
        cyggcc_s-seh-1.dll => /usr/bin/cyggcc_s-seh-1.dll (0x3f7530000)
        cygwin1.dll => /usr/bin/cygwin1.dll (0x180040000)
        cyggfortran-5.dll => /usr/bin/cyggfortran-5.dll (0x3f6e40000)
        cygquadmath-0.dll => /usr/bin/cygquadmath-0.dll (0x3f1d90000)

For the a.exe built with MinGW plus the Equation.com GFortran, ldd reports:

        ntdll.dll => /cygdrive/c/WINDOWS/SYSTEM32/ntdll.dll (0x7ffabeee0000)
        KERNEL32.DLL => /cygdrive/c/WINDOWS/System32/KERNEL32.DLL (0x7ffabd8a0000)
        KERNELBASE.dll => /cygdrive/c/WINDOWS/System32/KERNELBASE.dll (0x7ffabc940000)
        msvcrt.dll => /cygdrive/c/WINDOWS/System32/msvcrt.dll (0x7ffabdf40000)

If, when you run ldd on what you think is a Cygwin-built EXE, you see “msvcrt” in the output, that is a sign that your paths are mixed up.
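That check can even be scripted. Below is a small sketch that greps the ldd output for the telltale runtime DLL; the hard-coded `deps` string here stands in for what you would actually capture with `deps=$(ldd a.exe)`:

```shell
# Sketch: classify a build by which C runtime appears in its ldd output.
# In real use, replace the literal string with:  deps=$(ldd a.exe)
deps='msvcrt.dll => /cygdrive/c/WINDOWS/System32/msvcrt.dll (0x7ffabdf40000)'
case "$deps" in
  *cygwin1.dll*) echo "Cygwin runtime: built with Cygwin gfortran" ;;
  *msvcrt.dll*)  echo "msvcrt found: MinGW-style build, paths may be mixed" ;;
esac
```

For the MinGW-built example above this prints the msvcrt warning line.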


Bite your tongue! Don’t blame Gfortran without checking! That was the whole point of my long post.

I just checked the Windows Julia 1.7.2 installation using ldd, and I find that many of Julia’s DLLs are built with MinGW as well. This leads me to suspect that if you compare Julia run times on Windows and Linux, the Windows versions will come out as significantly slower. MinGW makes it convenient to port Linux applications to Windows, but there appears to be a price to pay.

Just as with Gfortran, it would be unfair to compare Ifort and Julia on Windows and conclude that Julia is slow compared to Fortran, without taking into account the reliance of Julia on MinGW.

Similarly, it would not be quite proper to compare the performance of a Linux application/package such as Julia to a MinGW Windows port of the same package and conclude that “Windows is three times slower than Linux.”


Thank you again @mecej4 ! Nice.

I think we are getting closer to fixing the problem.

Now here is the thing. I installed cygwin64 from
https://www.cygwin.com/
I did

Install Cygwin by running setup-x86_64.exe

as the website suggests.

As you can see from my desktop, there is a Cygwin64 Terminal icon there. I double-clicked that icon, so I should be in the Cygwin64 terminal environment. However, as you said (again, thank you very much indeed for the ldd trick, that is indeed helpful!), perhaps my path is mixed up, because my cygwin64 build does not produce the output you show; I see msvcrt, as below.

  1. Do you know how to fix the environment variables?
    My path variables are:

Are you suggesting deleting the path of all other gfortran versions? Such as like gcc/bin/, etc?

  2. By the way, sorry if this is a stupid question, but is there a way to call gfortran and make in Cygwin without using the Cygwin64 terminal?
    I briefly checked the cygwin64 folders, but I did not find things like gfortran.exe or make.exe.

Thanks much!

PS.
In the cygwin64 terminal, the `clean` target in make should be the same as in pure Linux,

clean:
	rm -f $(EXEC) *.mod *.mod0 *.smod *.smod0 *.log *.o *~

The previous,

clean:
	@del /q /f $(EXEC) *.mod *.obj *~ > nul 2> nul

works for the make in equation.com’s gcc pack for Windows.

Yes, I see the problem in your screenshot that shows %PATH%. It includes the following line:

c:\gcc\libexec\gcc\x86_64-w64-mingw32\11.2.0

You do not have to use the Cygwin Terminal; I rarely use it. You can use Cygwin tools from any CMD or other command shell, if you make the Cygwin /bin and /usr/bin directories accessible through PATH. Regardless of how you go about this, it is ultimately your responsibility to arrange the environment to work correctly for the task that you are currently performing.
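For what it is worth, a minimal sketch of that arrangement from a plain CMD prompt, assuming Cygwin was installed to the default C:\cygwin64 location (adjust the path to your install):

```
:: Put Cygwin's /bin first on PATH for this CMD session only:
set PATH=C:\cygwin64\bin;%PATH%
:: Confirm which tools now resolve first:
where gfortran
gfortran --version
make --version
```

`where` lists every match on PATH in order, so it also reveals any leftover MinGW gfortran that would otherwise shadow the Cygwin one.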

If you need more help with Windows %PATH% and other environment settings, you can consult web pages or manuals, or a local expert.


It seems that MinGW (I may have confused MinGW with MinGW-W64) is constantly updated, and its last update wasn’t in 2013. See:

PS. The difference between MinGW-W64 and MinGW is that MinGW compiles only 32-bit executable programs, while MinGW-W64 compiles 64-bit or 32-bit executable programs.


I may have been a bit careless in that I simply looked up this MinGW distribution that seemed reasonable as a source of “toolchains targeting W64”. I do not know which MinGW/MinGW-w64 tools are used by Equation.com and JuliaLang.org to build their compiler distributions.

There is no Julia package available from Cygwin, but they do have distributions that contain “MinGW” in their names/descriptions.

There is a thread “Julia slower in Windows” on JuliaLang-Discourse in which slowdowns have been noticed in Windows versus Linux.

These slowdowns do not seem to be present for Linux packages run under WSL-1, which I have used sometimes. I do not know about WSL-2, and it would be interesting to hear from someone who uses Gfortran or Julia on WSL-2.

In general, the main slowdown Julia has on WSL is file-system stuff. My guess for Fortran is the performance of libm: the Windows libm implementations are often subpar, which Julia fixes by not using the system libm. (This also has the advantage of better cross-platform reproducibility.)

Thanks for the comment, @oscardssmith.

Before joining this forum, I had barely heard of Julia. Some of the posts in fortran-lang.discourse named Julia as a threat to Fortran, so I searched and read some articles claiming that Julia was xxx times faster than Fortran, etc. I felt that the claims were not credible, and wanted to give Julia a test-drive. I downloaded the 1.7.2 distribution for Windows, and ran a few examples of nonlinear regression.

The Fortran version compiles and links in ~2 seconds, and the run takes less than a second. A second run of the EXE takes 0.04 s, since the DLLs are now cached. The Julia version took 25 seconds for the first run, of which 17 s were used to process the “using PyPlot” directive. A second run while staying within the Julia REPL took 0.4 s. What disappointed me was the inability to compile stable source code into object files, EXEs and libraries, and the huge startup time for running any Julia program unless one stays inside the REPL.


Any idea how the same examples compare on Linux?

Thank you @mecej4 !
I did install cygwin64, and gfortran and make there. Indeed, I got about the same performance as yours on Windows,

Although gfortran on Windows (0.78 s) is still a bit slower than on Linux (0.6 s), that difference is acceptable for now.

I have not tested MinGW-W64 yet, but for now I can say, at least based on my experience so far, that on Windows the gfortran in cygwin64 is the fastest.
Its performance combined with OpenMPI within cygwin64 still needs to be checked.


On WSL-2, Hyper-V, VMware, or native Ubuntu, at least for this code of mine and some other codes, my experience is that gfortran’s performance is about the same.

The only ‘issue’, I hope you do not mind me saying so, from a user’s point of view, is that on Windows there are many different versions of gfortran (MinGW, Cygwin, equation.com, etc.), and other than the gfortran in cygwin64, most versions’ performance is not good enough (I believe exactly due to what you said, the DLL issue). If users (who are not Fortran experts) do not install the best gfortran version and find the code runs slowly, they will just blame gfortran.

However @mecej4 , after some more checking,
I found that while the gfortran in cygwin64 performs well for this small code of mine,
I have some more complicated code (roughly, those pYq_i results you mentioned are replaced by results from ODE solvers) for which the gfortran in

performs 3X better than cygwin64’s.
:rofl: I am a little puzzled.

I mean, from a user’s point of view, it seems the Windows versions of gfortran may not have the most consistent performance. However, gfortran’s performance seems to be consistent on Linux (even in a Linux virtual machine) and on Mac, when it works.

But I know Intel oneAPI depends on Visual Studio for building and linking, while gfortran does not. Perhaps something in Visual Studio does the magic.

I have used two sources of gfortran on Windows: MinGW-W64 and 64-bit equation.com. (Each alternative requires careful setting of the PATH environment variable, which may be the issue in this thread?)

What I note is that equation.com’s version produces much larger .exe files than MinGW-W64, which I attribute to the use of fewer .dll dynamic links. As a consequence, there is a slower initial startup of the .exe, but less overhead once the program starts. Using timers initiated during the run, the equation.com version has slightly faster measured computation performance, as .dll loading is not included in my test run times.

This is an interesting thread, as I have not found (equation.com’s) gfortran on Windows to have poor performance, although it is important to note that my testing is for a different run profile: my tests involve intense computation over minutes or hours, not fractions of a second where the startup time is significant. You only load the .dlls once, so this delay is not repeated.
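As an aside, the same trade-off (larger .exe, fewer DLL loads at startup) can be made explicitly with any of these gfortran builds; the runtime-linking flags below are standard gfortran/gcc options, though the actual effect on the thread's benchmark is untested here:

```shell
# Sketch: fold the Fortran and gcc runtimes into the executable so fewer
# DLLs need to be located and loaded when the program starts.
gfortran -O2 -static-libgfortran -static-libgcc ran.f90 samplers.f90 EM_mix.f90
```

Running ldd on the result should then show noticeably fewer entries than the Cygwin listing earlier in the thread.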

Most of the comparison of Julia to gfortran in this thread appears to focus on the startup and some intrinsics, while I am more focused on multi-threaded AVX computation. I can’t conceive that Julia would be faster than gfortran for the types of computation I am doing, but there are always going to be types of computing that suit a particular language.

I reviewed the code in post #9 to see if I could identify types of coding that may not suit gfortran.

  • lots of tab characters in the code, which are not portable and made it difficult to test with other compiler tools I use.
  • I am not familiar with ishft (i,j), especially where i and j are of different kinds. I suspect “j” should be a default integer?
  • auto-allocation is used, e.g. “qt1 = mu01 + sig01*gaussian(nsub1)”, although this is not a significant CPU-time cost.
  • subroutine steptest uses “do concurrent( i=1:nsub, k=1:kmix )”, although this is not a significant CPU-time cost. Not sure why this is adopted?
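On the ishft point, the standard allows the SHIFT argument to be a default integer even when the first argument is of a wider kind; the result takes the kind of the first argument. A minimal standalone sketch:

```fortran
! Minimal check of ishft with mixed kinds: the shift count is a default
! integer, the first argument (and the result) are of kind i8.
program ishft_demo
  implicit none
  integer, parameter :: i8 = selected_int_kind(15)
  integer(i8) :: mask24
  mask24 = ishft(1_i8, 24) - 1_i8
  print *, mask24   ! 16777215, i.e. 2**24 - 1
end program ishft_demo
```

So the mixed kinds are legal; whether they hurt performance is a separate question.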

However, from my Win64 equation.com gfortran testing, most of the time is consumed in function pYq_i_detail (called 10 million times). This is provided as an external function argument to subroutine MC_gauss_ptheta_w_sig, which is called via subroutine prep.
Changing it from a supplied function argument to an explicit function call did not change the performance.
It uses intrinsic exp and **2 and does not appear to utilise AVX.

Perhaps Julia has a better exp implementation?


This is a bit late, but I just now noticed that you have a potential major bug in your source file ran.f90.

At the beginning, you have the declaration

integer, private, parameter :: i8=selected_int_kind(15)

Later, you have several variable declarations that use the kind number i8, such as

integer(kind=i8),  parameter :: mult1 = 44485709377909_8

This is correct if and only if i8 = 8. Whether this is true or not depends on the compiler. For Silverfrost FTN95 (Windows) and NAG (all platforms), this is not true.

Before you forget and run into trouble, change _8 to _i8 in your code, and take care to avoid repeating this mistake. Fortunately for you, the error will probably be caught at compile time by a compiler for which i8 /= 8, but you cannot count on that.
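In a minimal sketch, reusing the declarations quoted above, the portable form looks like this:

```fortran
! Kind-suffix fix: the literal's suffix must name the declared kind i8,
! not the compiler-specific kind number 8.
module ran_kinds_demo
  implicit none
  integer, private, parameter :: i8 = selected_int_kind(15)
  ! Non-portable: 44485709377909_8 assumes that kind 8 is the
  ! 15-decimal-digit integer kind, which NAG and FTN95 do not guarantee.
  integer(kind=i8), parameter :: mult1 = 44485709377909_i8  ! portable
end module ran_kinds_demo
```

With the `_i8` suffix the literal is evaluated in the same kind as the variable on every compiler.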


I tried to investigate why the Windows gfortran implementation of function pYq_i_detail is reported to be slower than others.
I tried introducing array syntax, below, but this did not achieve a run-time improvement.
(Oops, I just noted an error with mean(mi), but it did not change the error report below!!)

  function pYq_i_detail_array (theta,i) ! in principle this should be faster than pYq_i.

!  modified to introduce array syntax into the calculation, without improvement !!

    real(kind=r8) :: pYq_i_detail_array, theta(dim_p)
    integer(kind=i4) :: i

    integer(kind=i4) :: j
    real(kind=r8) :: fact_mean, log_pYq_i, product_sigma_inv, mean(mi), sigma_inv(mi), log_pYq_j(mi)

    calls_pYq_i_detail = calls_pYq_i_detail + 1

    fact_mean         = D/theta(2)
    do j=1,mi
      mean(j)       = fact_mean / exp (theta(1)*t(j))
      sigma_inv(j)  = abs(sig*mean(j))
    end do
    log_pYq_j(:)      = (Yji(:,i)-mean(:))/sigma_inv(:)
    log_pYq_i         = -half * dot_product (log_pYq_j, log_pYq_j)
    product_sigma_inv = normpdf_factor_pYq_i / product(sigma_inv)

    pYq_i_detail_array  = product_sigma_inv * exp(log_pYq_i)
    return
  end function pYq_i_detail_array

However, the most likely indicator of why the Windows version is slower is the following warning at the end of the run:
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_DIVIDE_BY_ZERO IEEE_UNDERFLOW_FLAG

It looks like the data set based on random values is triggering IEEE exceptions, which can significantly increase run time.
Perhaps Julia and other implementations are not doing the checking they should?

I think you need to generate a more reasonable data set.
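If one wants to separate the exception handling from the arithmetic itself, gfortran has flags for both directions. These options do exist in current gfortran, though their effect on this particular benchmark is untested here:

```shell
# Suppress only the end-of-run IEEE note (the exceptions still occur):
gfortran -O2 -ffpe-summary=none ran.f90 samplers.f90 EM_mix.f90
# Or trap the first invalid / divide-by-zero operation, to locate it
# under a debugger with a backtrace:
gfortran -O2 -g -ffpe-trap=invalid,zero ran.f90 samplers.f90 EM_mix.f90
```

Note that underflow trapping (`-ffpe-trap=underflow`) would abort this program constantly, so it is usually left out.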


Thank you @mecej4 and @JohnCampbell .

Eh, yeah, I mean, the thing is: exactly the same code, let us just say with gfortran.
The underflow warning exists on both Linux and Windows.
However, gfortran on Linux performs normally, taking 0.5 s,
while on Windows it took 3 s for the equation.com version and 0.7 s for the cygwin64 version (perhaps this version is the same as your MinGW64 version).

The 0.5 s vs. 3 s is probably not caused by overhead; it is consistently 6X slower.
If you change the value of mgauss from 1000 to 5000, as shown at line 55 in the file EM_mix.f90, you will see that on Linux it takes 2.5 s while on Windows it takes around 15 s; again, the Linux version is 6X faster.

I mean, again, for this small code, the equation.com gfortran version on Windows is simply 6X slower than on Linux, while the cygwin64 version of gfortran seems to perform almost the same as on Linux.

However, for more complicated code, I found that the equation.com gfortran version on Windows performs similarly to Linux (on Windows it is 30% slower, but still acceptable), yet the cygwin64 version can be 6X slower than on Linux. So on Windows, one cannot simply conclude which gfortran version is the best.

But the bottom line is, I think, or I hope, that for the same code and the same optimization flags, gfortran’s performance on Windows can be almost the same as on Linux. If there is a huge performance difference, the main problem is probably not in the code itself. After all, no one wants to write a gfortran code particularly for Windows, right? :rofl:

PS.

On Linux it shows the same warning but performs just fine.
Intel Fortran does not show the warning, and its performance is consistent on Linux and Windows.
The Julia code does not have that warning.


Thank you @mecej4 , yeah, ran.f90 is a little sloppy in this code; I can change that _8 to _i8.
But that does not solve the performance puzzle on Windows. Thank you very much indeed all the same :slight_smile:

Thank you @JohnCampbell too!

Yes, j should be just a default integer. I also need to change all the _8 to _i8 in ran.f90, such as

   integer(kind=i8),  parameter :: mask24 = ishft(1_i8,24)-1
   integer(kind=i8), parameter :: mask48 = ishft(1_i8,48)-1

Eh, the code is below; may I ask what the problem is with using do concurrent?

	do concurrent( i=1:nsub, k=1:kmix ) ! change mgauss_ik to minimize delta Ni.
		mgauss_ik(i,k) = min(max(int( wknik_sig(i,k)/norm_sig(i)*mgauss_tot),mgauss_min),mgauss_max)    
	enddo

About optimization, @JohnCampbell :
function pYq_i_detail is basically below,

I know that in this small code function pYq_i_detail is the most time-consuming part, so I tried my best to optimize this part and did many experiments; finally, I believe this implementation of function pYq_i_detail should be the fastest one can get. :slight_smile: Overall, instead of calculating the product exp(stuff)*exp(stuff)*exp(stuff)*..., I calculate the sum = stuff + stuff + stuff + ... first, and then finally do one exp(sum). In fact, in some cases the real thing we need is just the sum, so using the sum also avoids the exploding exp(sum) issue.
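That rewrite can be shown in a tiny standalone program; the exponent values in x are made up for illustration, but the two expressions are mathematically identical while the second calls exp only once:

```fortran
! Sketch of the exp-of-sum rewrite: product(exp(x)) can underflow term by
! term and costs one exp per element; exp(sum(x)) costs one exp total.
program exp_sum_demo
  implicit none
  integer, parameter :: r8 = selected_real_kind(15)
  real(r8) :: x(4)
  x = [-0.5_r8, -1.5_r8, -2.0_r8, -0.25_r8]
  print *, product(exp(x))   ! many exp calls
  print *, exp(sum(x))       ! one exp call, same value up to rounding
end program exp_sum_demo
```

Keeping the sum also means that when only log-likelihoods are compared, the final exp can be skipped entirely.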

IEEE warnings can significantly change run times on Windows.

I have had this performance problem in the past when trying to benchmark a linear-equation solver with a large “array” of random numbers.
Can you produce a more realistic data set?

I also note that “mgauss” can vary depending on the data, so can a different random-number set change the computation?


Thank you very much @JohnCampbell !

I can try.
The warning message is below,

Uhm, may I ask: why does the cygwin64 version of gfortran perform fine on Windows for this code (it took 0.7 s)?
On the other hand, if the real data really does have an underflow problem, how can we ‘fix’ that?

I may be absolutely wrong, but I feel underflow may not be a very uncommon issue, and it seems it should more or less be the compiler’s job to handle it neatly.
Intel Fortran on Windows, for example, does not show any such message, and its performance is consistent on Linux and Windows.

mgauss is the number of samples in the Monte Carlo simulation. Its value is set at line 55 in EM_mix.f90 and then never changes.
In this code, we try to find parameters to fit the data Y(mi,nsub) at line 122. The data has some random noise in it, so it depends on the random seed a little bit.

However, if you increase nsub at line 47 from 100 to 200 or more, as below,

the data set becomes bigger, the effect of the random noise decreases, and then the value of the random seed (irn at line 43) should not influence the result noticeably. Although the value of the likelihood may depend on the random-number seed (because a different seed gives different data Y(mi,nsub) and therefore a different likelihood), the parameter estimates should not be influenced by the seed noticeably.
Such as below:

The value of LogLike, the log likelihood, depends on the data and therefore on the seed; however, the parameter estimates such as w1, w2, Mu1, Mu2, MuV, Sig1, Sig2, SigV, Sigma do not depend on the seed noticeably.

You may also increase itermax at line 46 from 50 to 100 or any number, simply to increase the run time by doing more iterations.

If mgauss is set bigger than about 1000 and nsub >= 200, then merely changing the random-number seed should not influence the parameter estimates very much.

In terms of speed, because the total number of iterations is fixed and the time for each iteration is almost the same, a different random-number seed should not influence the computing time noticeably.

Thank you very much @JohnCampbell .
Uhm, in fact, I am not very sure which part of my code triggered the IEEE warning.
But again, cygwin64 gfortran has no big performance issue for this small code.
My data is Y(mi,nsub) at line 122 in EM_mix.f90; a sample is below, 100 x 5 values.
The 100 rows are nsub = 100 patients, and the 5 columns are mi = 5 observations (like the value of some drug concentration) for each patient at time = 1.5, 2.0, 3.0, 4.0, 5.5.
The data is not very weird, I think.

  2.65401349   2.40026224   1.99623561   1.41169297   0.88217734
  3.45838822   2.98550611   2.92380038   2.08054658   1.33617050
  3.82526123   3.03327603   2.34121218   1.52205476   0.92273596
  2.28668992   2.01973948   1.63948803   1.02627141   0.66650634
  3.40903589   2.82301185   2.34513287   2.01947901   1.34372061
  2.83453767   2.20061227   1.69325516   1.28140681   0.73161632
  2.75068773   2.23587612   1.71830050   1.01674311   0.64559501
  4.04768132   4.01698762   3.53190277   2.52058298   2.14937107
  4.08435874   3.40789423   2.75980292   1.39004878   1.10938533
  2.88367066   2.28895940   1.43275093   1.08778145   0.75014338
  3.12304854   2.84925010   2.36672949   1.54044628   1.02207904
  3.72852625   2.74760371   2.19501208   2.06158678   1.02571065
  3.37808119   2.68092516   2.31043504   1.77355439   0.88879246
  2.53444154   1.59567106   1.26306014   0.82380825   0.43744635
  2.85534255   1.93740693   1.85237050   1.24226383   0.77331081
  3.78665876   3.26490532   2.95064268   1.67801971   1.31745667
  3.05567463   2.65547087   1.81873971   1.23136769   0.69043155
  2.80767064   2.89798552   1.86058972   1.57507134   1.16067511
  3.30489986   2.74398369   2.21825740   1.83461938   0.99241819
  2.98543229   2.24426962   1.18329046   0.66491925   0.51613033
  3.61874634   2.09498495   1.41921642   0.97643742   0.64182735
  4.26231605   4.47587570   3.15080034   1.71895424   1.46397885
  3.50932076   2.61622640   2.56780792   2.02193280   1.51401681
  3.36647134   2.56527024   2.01409438   1.67567619   1.04806922
  4.50191529   3.54567059   2.74994640   2.20710948   1.83609478
  3.22450873   3.24666074   2.62405445   1.98347337   1.27931557
  2.98190040   2.95397037   1.62222461   1.28686358   0.93436784
  4.53695504   3.35934880   3.42288141   2.50322969   1.74250924
  2.62175799   2.46776757   1.86818667   1.46210614   0.85735527
  2.87686982   2.58862926   1.56346886   1.10365948   0.80911058
  3.78168621   3.68270753   3.00255413   1.93803745   1.71793505
  3.51136623   3.39427042   2.09630591   1.53876649   1.23730296
  3.62972062   4.14914361   2.77826374   2.03943793   1.71782968
  2.35057638   2.53769530   1.82456819   1.29633972   0.87441877
  1.89964379   1.79119682   1.24603350   0.81811392   0.42941111
  2.75157807   2.74380225   2.16641154   1.88319029   1.17524137
  2.64993979   2.17434263   1.48572844   1.05423214   0.61687510
  3.95709936   3.42263700   2.50621633   1.75887524   1.04011591
  2.80681462   2.06820029   1.47908019   1.03778862   0.56211488
  2.70812624   2.66001341   2.06492028   1.36820939   0.85063758
  3.10427718   3.65212115   1.75620884   1.68815232   1.03946008
  3.64235765   3.56826708   3.51482799   2.40909684   1.76638592
  4.06031791   2.97112810   2.49563006   1.43784176   1.14875997
  3.34082492   3.11935202   1.84614646   1.58291198   1.03015544
  3.22193708   2.54302857   1.79634980   1.11142443   0.65130767
  3.58215208   2.62073121   2.66638423   1.84202624   1.18265738
  3.35761282   2.39419717   2.23732805   1.91448176   1.03428109
  3.28333030   2.19348442   1.91633994   1.25456125   0.80776490
  3.10520090   2.09426322   1.32296942   1.03168506   0.61926324
  3.38021460   2.03462263   1.34017676   0.88276649   0.66037949
  3.40869343   3.41133803   1.93802175   1.81579843   1.06323731
  3.26968162   2.48983221   1.83106822   1.49248039   0.91877651
  3.46696457   3.40990458   2.69489217   1.65735733   1.46487218
  3.83003837   3.16419455   2.44522934   1.75292499   1.28435507
  3.86301454   2.60621378   2.81370680   2.28488143   1.99767924
  3.55586075   3.72374276   2.62138282   1.83746650   1.59064934
  4.10385990   3.54228113   2.74143151   2.12231404   1.62115274
  2.94285017   3.02266528   1.62344637   1.52739370   1.07545293
  3.30932595   2.75741807   1.77969176   1.41303613   0.88816817
  3.22341324   2.55816465   1.95459119   1.16511846   0.86159415
  2.84202064   2.43873974   1.60431253   1.36484335   0.86942491
  3.45614629   3.01040741   1.96346018   1.82010097   1.37410590
  3.00944969   2.89417444   1.58785266   1.10963363   0.64515890
  2.60424738   2.11331112   1.75463928   1.11998501   0.55229481
  2.99971059   2.47432836   1.56259031   1.14480975   0.67926735
  4.17905512   2.84980486   2.53602972   1.95078086   1.21125491
  3.93488338   3.34035560   2.61486487   1.81986854   1.22665323
  3.07076678   3.22755472   2.28705685   1.86866746   1.22210486
  4.47686033   3.72474003   3.15337092   2.16680869   1.50938806
  3.81167928   2.98266971   2.39673097   1.41967711   1.28361822
  2.65561655   2.72189958   2.30066879   1.76456974   1.26550255
  2.72273553   2.33356039   1.75159175   1.35047290   0.78941703
  2.90349003   2.08078892   1.38951964   0.93077741   0.49728869
  2.93430976   2.22440618   1.82547359   1.41442592   1.09560305
  2.91292812   2.54891278   1.45965571   1.36586297   0.63560401
  3.29525021   2.54570345   2.23982574   1.51381464   1.08854350
  3.94182345   2.97545842   3.02333913   2.54221231   2.21077273
  2.70802491   2.70036692   1.93903276   1.26357362   0.87813818
  3.10064585   2.80358061   2.30459697   1.58583013   1.05427075
  4.37507098   3.57259749   2.88659928   2.35402341   1.53044956
  1.69788520   1.18806704   0.68466912   0.38963738   0.16488946
  1.85341767   1.33030428   0.67399323   0.39109777   0.08488546
  2.13477615   1.40412914   0.76514415   0.38096703   0.19578517
  1.84198433   1.63604246   0.93591308   0.58238534   0.22486241
  1.88057122   1.40977237   0.74471384   0.52176656   0.20271034
  1.88576417   1.56837264   0.88622076   0.45210230   0.16227644
  1.98985910   1.50505439   0.80204031   0.47339445   0.17327696
  1.85425535   1.30655019   0.74395125   0.37028203   0.19013104
  2.06818492   1.30300840   0.67451415   0.43671893   0.15880358
  2.36072067   1.47919785   1.03115733   0.56690544   0.26369481
  1.87204096   1.65757104   0.79792967   0.59266594   0.25140979
  2.00575566   1.53834981   0.88645263   0.50812817   0.23976710
  1.86504634   1.15799719   0.77256567   0.48029002   0.17740237
  2.22557612   2.13963805   0.83146404   0.53317038   0.17601324
  1.79107457   1.74749490   0.99333983   0.55446976   0.24992792
  2.15386902   1.57688534   0.64048563   0.40716069   0.19150684
  2.00206904   1.66489366   1.03325675   0.62492999   0.32246391
  2.14706874   1.56950613   0.79770883   0.51951425   0.14801337
  1.63459898   1.12621802   0.57041317   0.33630108   0.14360490
  1.61198564   1.27536164   0.62245615   0.34712257   0.11828325