The built-in random number generator of GFortran is lightning fast and more than sufficient for my tasks. Is it also used for large-scale Monte Carlo simulations on HPC systems, or are other, faster RNGs used instead?
I personally do not use the default RNG for large scale simulations.
There is a relevant post below,
I think the biggest issue is that it is not clear whether, given the same seed, the random numbers generated by the default RNG are repeatable. Different gfortran versions, or the same version on different platforms, may give different sequences.
Also, gfortran and Intel Fortran are likely to give different random numbers from their default RNGs.
So the same code will generate slightly different results with gfortran and Intel Fortran.
The other thing is that the period of the default RNG may not be long enough, though perhaps in many simulations this is not a problem.
If repeatable results and the length of the period of the RNG are not important to you, perhaps you could just use the default RNG.
What exactly do you not like about the gfortran PRNG algorithm? It is certainly repeatable, if chosen to be so, and the period length is 2^{256} - 1. It also has the nice feature that you can skip forward an arbitrary number of steps. That allows multithreaded applications in which each thread gets its own stream, itself with a long period and unique to that thread.
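As a concrete illustration (a minimal sketch, not specific to any one compiler), fixing the seed with the standard `random_seed` interface pins the sequence for a given compiler and version; the seed array length is processor-dependent, hence the size query:

```fortran
program repeatable_demo
   implicit none
   integer :: n
   integer, allocatable :: seed(:)
   real :: x(5)

   call random_seed(size=n)   ! seed length is processor-dependent
   allocate(seed(n))
   seed = 12345               ! any fixed values give a reproducible stream
   call random_seed(put=seed)

   call random_number(x)
   print *, x                 ! identical on every run with the same compiler/version
end program repeatable_demo
```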
These features are so good that I’ve wondered if this algorithm should be standardized for all fortran processors (i.e. portable between compilers). Of course if someone wants a different algorithm, they are free to use it, but the gfortran algorithm is very nice in many ways.
My main complaints about how PRNGs work are that they sample only a very small fraction of the [0.0,1.0) floating point number set and that they only return floating point values; many applications would benefit from random integers.
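For what it's worth, random integers can be derived from `random_number` in the usual way (a small sketch; it inherits whatever granularity the underlying real sample has):

```fortran
subroutine random_int(n, k)
   ! Return an approximately uniform integer k in 1..n, derived from a real sample.
   integer, intent(in)  :: n
   integer, intent(out) :: k
   real :: u
   call random_number(u)        ! u in [0,1)
   k = 1 + int(u*real(n))
   k = min(k, n)                ! guard against roundoff when u is just below 1
end subroutine random_int
```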
Why not use the random_init subroutine? It is super convenient for replication purposes.
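For reference, `random_init` is a Fortran 2018 intrinsic subroutine (supported in recent gfortran releases); a minimal usage sketch:

```fortran
program init_demo
   implicit none
   real :: x(3)
   ! repeatable=.true. asks the processor to start random_number from the same
   ! state on every execution of this program (image_distinct matters for coarrays).
   call random_init(repeatable=.true., image_distinct=.true.)
   call random_number(x)
   print *, x
end program init_demo
```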
IIRC the Fortran standard does not dictate that a specific algorithm be used. Hence the sequence of numbers coming from a given seed will likely change if you switch compilers. It could in principle also change just by updating the compiler version, though most serious compilers will likely strive to maintain backwards compatibility.
Reproducibility should be a priority for most simulation software, so I don't think the built-in RNG is suitable for the OP's needs.
If you can mix C++ code into the Fortran program, then the C++ standard library could be an option. It provides implementations of specific algorithms like the Mersenne Twister: Standard library header <random> (C++11) - cppreference.com
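If you go that route, the Fortran side would only see a thin extern "C" wrapper around `std::mt19937`. Here is a sketch of what the Fortran interface could look like; the wrapper and its function names (`mt_seed`, `mt_uniform`) are hypothetical, and the C++ side would have to be written separately using `<random>`:

```fortran
module mt19937_wrapper
   ! Hypothetical bindings to a C++ wrapper around std::mt19937 compiled with
   ! extern "C" linkage; the names below are illustrative only.
   use, intrinsic :: iso_c_binding, only: c_double, c_int64_t
   implicit none
   interface
      subroutine mt_seed(seed) bind(C, name="mt_seed")
         import :: c_int64_t
         integer(c_int64_t), value :: seed
      end subroutine mt_seed
      function mt_uniform() result(u) bind(C, name="mt_uniform")
         import :: c_double
         real(c_double) :: u
      end function mt_uniform
   end interface
end module mt19937_wrapper
```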
I believe @plevold answered that perfectly, @RonShepard.
I have no beef with gfortran's RNG.
It is just that I am not sure whether, given the same seed, different versions of gfortran will give the same random number sequence. So it is possible that the results are not reproducible across versions, and this can be inconvenient.
For example, say that with exactly the same code you use gfortran 9.0 with its RNG and find a bug. Now you update to gfortran 13 with its RNG, and the bug "disappears" with the new sequence. So you cannot reproduce the bug (although it is still there), and that kind of thing can be inconvenient for people.
The people I know who do MC simulations do not use the intrinsic RNG.
However, the intrinsic RNG still has its uses in some cases.
For example, if you just want to illustrate some stochastic process like Brownian motion, the intrinsic RNG should be more than enough.
Or, if you are 100% sure the code is bug-free and you find that using the intrinsic RNG greatly increases the speed, then go for it.
PS.
BTW, for Monte Carlo simulations, if using a different RNG gives very different results, that almost certainly means something is wrong: perhaps not enough uncorrelated samples, or simply bugs in the code or in the algorithms.
Yes, of course. That is because the algorithm is not defined by the fortran standard (or even a de facto standard), so if you want to use a good PRNG with multiple compilers, you must use your own library function. That is a separate issue from saying that the gfortran PRNG is not reproducible, that it has a short period, or that it has some other defect. As I noted previously, if the gfortran PRNG algorithm (which is called xoshiro256**) were taken as the standard, then it would be portable among all compilers, and thus it could be used by serious application codes.
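For anyone curious, the core update step of xoshiro256** is only a few lines. Here is a rough Fortran sketch (not gfortran's actual source; it assumes the usual two's-complement wrap-around on int64 multiplication, which gfortran gives by default but the standard does not guarantee):

```fortran
module xoshiro256ss
   ! Sketch of the xoshiro256** generator (Blackman & Vigna); any nonzero
   ! state is valid.  Proper seeding (e.g. via splitmix64) is omitted here.
   use, intrinsic :: iso_fortran_env, only: int64
   implicit none
   integer(int64) :: s(4) = [ 123456789_int64, 362436069_int64, &
                              521288629_int64,  88675123_int64 ]
contains
   function next() result(r)
      integer(int64) :: r, t
      r = ishftc(s(2)*5_int64, 7) * 9_int64   ! the "**" scrambler (wraps mod 2^64)
      t = ishft(s(2), 17)
      s(3) = ieor(s(3), s(1))
      s(4) = ieor(s(4), s(2))
      s(2) = ieor(s(2), s(3))
      s(1) = ieor(s(1), s(4))
      s(3) = ieor(s(3), t)
      s(4) = ishftc(s(4), 45)
   end function next
end module xoshiro256ss
```

A real library would also provide the jump functions that give the skip-ahead mentioned above, which is what makes per-thread streams practical.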
Stating the compiler version, flags, linked packages (like math libraries), OS, and so forth that were used to perform a set of computational experiments is a step towards reproducible results. You could even create a container (virtual runtime environment) using Docker or something similar.
I found the following article from 2021 on the topic of reproducibility to be interesting:
The guidelines given in the article of interest here are G4 - Declare software dependencies and their versions, and G7 - Provide clear mechanisms to set and report random seed values.
The article was part of an issue of Philosophical Transactions with the theme of
Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico
The introductory article by Peter Coveney et al. gives a summary of the topic and articles included in the issue. If you are up for a slightly more philosophical read, Coveney also wrote the following opinion piece:
containing some interesting insights on the nature of computational science.
The world of HPC is now dominated by GPU hardware for which vendor libraries exist:
For (high-end) AMD and Intel CPUs you also have:
Disclaimer: I have no experience with MC methods, but I take the existence of these libraries as a sign that for some customers maximum performance matters.