Physical constants

His project is much more elaborate than mine! All data sets from 1969 to 2018 are included, derived constants are calculated, and so on.

I just wanted something that automatically generates a simple Fortran module and that can be validated to the last digit.

1 Like

Let me know what you think about the fpm usage. In the end, fpm is just used to launch the test Fortran program, which generates a NIST-like file.

My first idea was just to put the different versions of the CODATA_constants.f90 module into a directory so that people can pick the version they need. But that’s probably not in the fpm spirit, where the library should be usable as a dependency in another project.

I think it would be nice to have the different versions around.

Something like

use codata_v2018
use codata_v2014

Possibly one could do use codata_latest to always use the latest one shipped with the library?

2 Likes

Rather than have different versions of the CODATA constants in one module, would it make more sense to have different versions of the project? That is, your fpm.toml would contain

[dependencies]
codata = { git = "https://github.com/vmagnin/fundamental_constants", tag = "2018" }
1 Like

The problem is that if I define tag versions v2018, v2014, and v2010, how could I add improvements to the modules? Suppose, for example, that one day I want to add variables to the CODATA module to handle the uncertainties of the constants; I would have to create new tags…
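
For instance, a minimal sketch of what such an addition could look like (the module content and the _uncertainty companion parameter are purely hypothetical):

module codata_2018_sketch
  use, intrinsic :: iso_fortran_env, only: wp => real64
  implicit none
  ! The value itself (exact in the SI since the 2019 redefinition)
  real(wp), parameter :: speed_of_light_in_vacuum = 299792458.0_wp
  ! Hypothetical companion parameter holding the standard uncertainty
  real(wp), parameter :: speed_of_light_in_vacuum_uncertainty = 0.0_wp
end module codata_2018_sketch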

@zerothi, @pmk
Yes, I could add a module:

module codata_latest
  use codata_2018
end module codata_latest

in a CODATA_latest_constants.f90 file.
Next update in 2022…

I guess that most people will use the latest values and that older versions are important only for reproducibility?

1 Like

I see. Would v2018.1 not be sufficient? It seems a bit weird to me to have multiple versions available at the same time.

I don’t think you can simply switch between CODATA versions unless you normalize the names first, as done in QCElemental, because some constants are renamed between the years.

1 Like

Same for my own needs: I will always use the latest version. But it seems old versions can be important for some people. I guess that if you want to reproduce old computations to the last digit, they can be needed.

Would a use codata_latest be OK for you?
A question: if I define my codata_latest module as in my previous post, can I still define something like:

  use codata_latest, only: c=>speed_of_light_in_vacuum

?

I think so. I cannot think of any reasons why it should not be possible.
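
Here is the kind of short test I have in mind (assuming codata_2018 declares speed_of_light_in_vacuum as a public parameter, as in the examples above):

program test_rename
  ! Renaming should work through the re-exporting module, since
  ! codata_latest makes the public entities of codata_2018 accessible
  use codata_latest, only: c => speed_of_light_in_vacuum
  implicit none
  print *, "c =", c
end program test_rename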

1 Like

Indeed, the older versions are important to reproduce older results to high accuracy. As an example from my experience, the changes in the speed of light between different CODATA versions change the last few digits of the eigenvalues (printed to 6 decimal digits) in density functional theory results for heavier atoms such as uranium. Physically you don’t care, but numerically you do if you are trying to reproduce another code’s results to high accuracy.

1 Like

Thanks, you’re right. A short test shows it works.

Yes, and it could be the case for projects with a lot of automated tests: if the last two digits of a constant change, overly strict numerical tests may start failing… So old values could be important for the tests… The situation is more complicated than it seemed…
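
To make that concrete, here is a sketch (the reference value is purely illustrative) of how an exact comparison breaks while a tolerance-based one survives:

program tolerance_test
  use, intrinsic :: iso_fortran_env, only: wp => real64
  implicit none
  ! Illustrative numbers: a stored reference and a result recomputed
  ! with a newer constants set that differs in the last digits
  real(wp), parameter :: reference = -13.605693122994_wp
  real(wp) :: computed
  computed = -13.605693122990_wp
  ! An exact comparison (computed == reference) would fail here;
  ! a relative tolerance absorbs the drift in the constants
  if (abs(computed - reference) > 1.0e-10_wp * abs(reference)) then
    error stop "outside tolerance"
  end if
  print *, "within tolerance"
end program tolerance_test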

1 Like

For a computer scientist working alone on his PC, we can suppose he has the responsibility of changing the constants’ names in his program if needed.
For bigger projects, this is a problem.

I totally understand the desire to have older versions available. The question is, would you ever need different versions available at the same time (i.e., your project uses the speed of light from both 2014 and 2018 in a single execution)? It seems unlikely to me. And thus having them be different versions of the project, rather than having them all present at the same time, would make more sense.

1 Like

At the same time, probably not unless you write a program to study the evolution of the constants! :wink:
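
Such a program could be as simple as this (assuming versioned modules named as proposed earlier in the thread):

program constant_evolution
  ! Purely illustrative: compare the same constant across two releases
  use codata_2014, only: c_2014 => speed_of_light_in_vacuum
  use codata_2018, only: c_2018 => speed_of_light_in_vacuum
  implicit none
  print *, "CODATA 2014:", c_2014
  print *, "CODATA 2018:", c_2018
end program constant_evolution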
Instead of a git tag, a git branch could be a solution.

Is there any downside of having it available at the same time? It would greatly simplify the maintenance, as improvements to the tooling would be available to all versions, as opposed to having different branches for each version.

1 Like

It would surely be simpler on my side. If I improve the Python script, it will be easier to update the different versions.

1 Like

For me, it is important to have several versions available at the same time.
For instance, if your code is old enough, you may need different versions to pass both recent tests and old tests (in my code, I have four versions: three from CODATA and one from a handbook).

1 Like

Hi,

I believe my stdlib_codata module is in decent enough shape that I can release it for test use: Bob Apthorpe / stdlib-codata · GitLab

I’m coming at this from the position of someone modernizing legacy code: first standardizing and consolidating constants, then increasing precision, then updating the set of constants as new revisions are released. The CODATA 2018 dataset is the most complete and is the least likely to change. All CODATA releases from 1973 onward are supported, including a 1969 dataset that was in common use before CODATA was formed.

I started by mechanically translating the allascii.txt file from NIST, but this proved difficult to review against source documents, so the constants were reordered and grouped in their traditional arrangement. Constant names are mostly derived from NIST descriptions, with a suffix to indicate precision; names in the other datasets are in the process of being normalized to the CODATA 2018 names to simplify upgrading. Principal data is stored as REAL128 (quad precision) and is cast to REAL64 and REAL32 to support double and single precision. Everything is defined as a PARAMETER, so there should be no run-time overhead; it’s up to individual compilers how they optimize this, but I tried to make it as easy on the system as possible.
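
These are not the actual identifiers from stdlib-codata, just a sketch of the storage-and-cast pattern described above:

module constants_sketch
  use, intrinsic :: iso_fortran_env, only: real32, real64, real128
  implicit none
  ! Principal value held at quad precision
  real(real128), parameter :: PLANCK_CONSTANT_Q = 6.62607015e-34_real128
  ! Compile-time downcasts for double- and single-precision use
  real(real64), parameter :: PLANCK_CONSTANT_D = real(PLANCK_CONSTANT_Q, real64)
  real(real32), parameter :: PLANCK_CONSTANT_S = real(PLANCK_CONSTANT_Q, real32)
end module constants_sketch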

I tried to document my design decisions so people can get a sense of what I was trying to accomplish and what I was concerned about. Also, I put together a small example program to illustrate how I think the modules could be used, with attention paid to access control and to presenting a consistent, simple interface to the application programmer.
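
To give a flavor of the access-control idea, a minimal wrapper (building on the hypothetical constants_sketch module above, not the actual example from the repository) might look like:

module my_constants
  use, intrinsic :: iso_fortran_env, only: real64
  use constants_sketch, only: PLANCK_CONSTANT_D
  implicit none
  private        ! hide everything by default
  public :: h    ! expose only a simple, stable name to the application
  real(real64), parameter :: h = PLANCK_CONSTANT_D
end module my_constants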

The project is set up to use CMake as the automated build system, but that’s mostly for documentation, testing, packaging, and cross-platform support. The nine CODATA module files could just be copied into a project; CMake is not required to use the modules (they are literally just giant lists of PARAMETERs…)

Documentation is built via Doxygen and LaTeX; I’m trying to build formal industrial-style QA documentation in PDF format. Currently documentation is limited to the CODATA 2018 module (503 pages); LaTeX died from lack of resources at around 2,400 pages, otherwise there’s no real reason PDFs for all nine datasets couldn’t be generated. Honestly, the documentation build process is more complex than the compilation and packaging processes combined.

A lot of tedious housekeeping effort is left to do, but I believe the 2018 dataset is stable enough for testing. I’d appreciate any feedback, bug reports, etc. This is all a bit of overkill, but simple use should be simple and complex use should be possible. If anything, take a look at the README and let me know if I’m missing something regarding design or explanation.

– Bob

4 Likes