Fortran and Neural Networks

There is an ongoing discussion on Reddit about the use of Fortran for machine learning. I thought it would be good to post it here, in case anyone can share insights into the topic, and to invite the interested audience on Reddit to this forum to learn more about Fortran.

4 Likes

As the redditors say, someone doing physics research should probably not be re-implementing neural networks or other algorithms that are already available. However, there is a neural-network code tailored to electronic structure, based on a 2007 paper:

Fortnet is a Behler-Parrinello-Neural-Network implementation, written in modern Fortran. Using atom-centered symmetry functions to characterize local atomic environments, Fortnet provides easy access to the famous BPNN neural network architecture to predict atomic or global properties of your physical system, featuring powerful but optional MPI parallelism.
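
For readers unfamiliar with the scheme, the atom-centered symmetry functions mentioned above are simple analytic fingerprints of an atom's local environment. Below is a minimal sketch (not Fortnet's actual code; module and parameter names are mine) of the radial symmetry function from the 2007 Behler-Parrinello paper, with the usual cosine cutoff:

```fortran
! Sketch of the Behler-Parrinello radial symmetry function
!   G2_i = sum_j exp(-eta*(r_ij - r_s)**2) * fc(r_ij)
! with the standard cosine cutoff fc. Illustrative only; not Fortnet's code.
module acsf
  implicit none
  real, parameter :: pi = 3.141592653589793
contains
  pure real function fc(r, rc)
    ! Cosine cutoff: smoothly decays to zero at the cutoff radius rc.
    real, intent(in) :: r, rc
    if (r < rc) then
      fc = 0.5 * (cos(pi * r / rc) + 1.0)
    else
      fc = 0.0
    end if
  end function fc

  pure real function g2(rij, eta, rs, rc)
    ! Radial symmetry function for one central atom, given the
    ! distances rij(:) to its neighbours.
    real, intent(in) :: rij(:), eta, rs, rc
    integer :: j
    g2 = 0.0
    do j = 1, size(rij)
      g2 = g2 + exp(-eta * (rij(j) - rs)**2) * fc(rij(j), rc)
    end do
  end function g2
end module acsf
```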

3 Likes

Actually, one of the members of this community wrote some very interesting stuff: [2004.10652] A Fortran-Keras Deep Learning Bridge for Scientific Computing and [1902.06714] A parallel Fortran framework for neural networks and deep learning. I am sure Milan will be able to comment on this for us himself.

1 Like

Thanks, @VladimirF and @Beliavsky. Indeed, I provided a link to Milan’s work in my comment here: https://www.reddit.com/r/MachineLearning/comments/nge6ht/d_fortran_and_neural_networks/gyujpf0?utm_source=share&utm_medium=web2x&context=3
If the Fortran community adopts the mindset of not developing new tools because they already exist in some other language X, no progress will be made in the Fortran community. With this mindset, none of the new Julia packages popping up on GitHub every day should ever have been written, because very smart people have already written optimized libraries for those tasks in C/Fortran/Python. Yet they are written, everything is developed from scratch in Julia, and no one seems bothered by the reinvented wheel there. Only when it comes to Fortran does this become an issue.

6 Likes

I think progress will be made either way. Languages should first of all interoperate: if there is a good library in C++, Python, or Julia, we should be able to make it usable from Fortran easily, and vice versa. Then, if there is a need or advantage to rewrite something in Fortran (so that, for example, you can extend it from Fortran itself), that can be done too.

7 Likes

I agree wholeheartedly with this.

I will always advocate, with as much of my own resources as I can put forth, for rewriting libraries and frameworks in modern Fortran: first and foremost for modern Fortran practitioners, but also for anyone else who wants scientific computing not to be clouded by the paradigm shifts in the commercial programming space.

My own “vision” for Fortran is for it to reclaim its status as the lingua franca of scientific and technical computing (something that was widely accepted not too long ago; cf. the link posted earlier by @ivanpribec about POOMA: “While the various flavors of Fortran are still the lingua franca of scientific computing, Fortran’s user base is shrinking, particularly in comparison to C++”). To realize this, one needs to do what @shahmoradi writes about Julia: scientists and engineers keen on Fortran need to author, reauthor, and refactor code like “crazy” in modern Fortran!

Fortranners needn’t pay any attention to the Reddit forum:

Fortranners would do well to think along the lines of the Feynman quote from his book title, “What Do You Care What Other People Think?”, and proceed to write more and more code in modern Fortran!

8 Likes

I agree in principle that all numerical algorithms should be available in modern Fortran, but rewriting an algorithm from scratch is time-consuming and error-prone. The R statistical language has more than 16,000 packages, and thousands of them would be worth having in Fortran; a few hundred already contain Fortran code. R resembles modern Fortran in having multidimensional arrays that “know their shape”, so you don’t need to pass array dimensions as function arguments. Thus a function in an R package looks like a Fortran function with assumed-shape array arguments that returns a derived type.
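
To illustrate the analogy, here is a minimal sketch (names invented) of that pattern in modern Fortran: an assumed-shape argument that knows its own size, and a derived-type result bundling several statistics, much like an R function returning a list.

```fortran
! Sketch of the R-like pattern described above: no dimension arguments,
! and several results returned in one derived type.
module summary_mod
  implicit none
  type :: summary_t
    real :: mean, sd
    integer :: n
  end type summary_t
contains
  function summarize(x) result(s)
    real, intent(in) :: x(:)   ! assumed shape: no size argument needed
    type(summary_t) :: s
    s%n = size(x)
    s%mean = sum(x) / s%n
    s%sd = sqrt(sum((x - s%mean)**2) / (s%n - 1))  ! sample standard deviation
  end function summarize
end module summary_mod
```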

Using the Fortran-RInside package, one can call R functions from Fortran. A faster way to get more statistical algorithms in Fortran than writing them from scratch would be to create Fortran wrappers for the desired R functions. If a wrapper becomes heavily used, or if rewriting the R package in Fortran would make the algorithm faster, a translation can be attempted. R is so widely used in statistics that it can be considered the reference implementation for statistical algorithms.

4 Likes

@FortranFan puts it rightly. If @milancurcic had also decided to use some existing Python/C++ NN tools instead of developing a native Fortran library, the Fortran package cited here would not exist either.

Thank you @shahmoradi for sharing on Reddit and inviting the users there, and thank you @VladimirF for mentioning me.

Neural-fortran came from my own desire to learn how neural networks work. I largely followed Michael Nielsen’s book (the first few chapters, really). At the same time I was writing the chapter on derived types, and I thought neural networks were a good concrete example to use in the chapter. They were not. The editors asked me to re-write the chapter, and I was left with the neural-fortran code.

I noticed a few interesting things then. First, the high-level API turned out to be as simple as Keras’s, with relatively little code needed to implement it. So the fact that Keras is a high-level Python API didn’t really make it a better API than the Fortran one. Second, the performance of my naive implementation (the simplest code you’d write when not worrying about performance) was competitive with the C++ Tensorflow one, and that code is optimized. Third, parallelizing neural-fortran using collectives was trivial, and running on any number of CPUs (single- or multi-node) is trivial as well. In contrast, it was difficult to get Tensorflow to run in parallel: following the tutorials and the official docs, no matter what settings I used, I couldn’t get it above 170% CPU use. And that’s why I wrote the paper.
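
For those curious what “parallel with collectives” looks like, here is a minimal self-contained sketch of the pattern (not the actual neural-fortran code): each image computes a partial gradient on its shard of a mini-batch, and a single co_sum averages it across all images.

```fortran
! Data-parallel gradient averaging with Fortran 2018 collectives.
! Compile with coarray support, e.g. `gfortran -fcoarray=single`
! or OpenCoarrays' `caf`.
program collective_gradients
  implicit none
  integer, parameter :: n = 4   ! size of a (toy) flattened gradient
  real :: grad(n)

  ! Stand-in for this image's backpropagation result on its shard.
  grad = real(this_image())

  ! Sum the partial gradients across all images, then average.
  call co_sum(grad)
  grad = grad / num_images()

  if (this_image() == 1) print *, 'averaged gradient:', grad
end program collective_gradients
```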

I may get small funding to implement convolutional networks in neural-fortran; I will know soon. That should greatly expand its usefulness. From the papers I’ve seen so far, CNNs seem to be the most widely used network architecture in the physical sciences, and especially the geosciences.

It’s nobody’s business to tell you what language you should or shouldn’t use for any task. Fortran is a perfectly fine language for machine learning. The biggest downside may be that it’s much more difficult to hire Fortran programmers compared to C++ or Python.

10 Likes

Many sophisticated tools, especially in FOSS, had a modest beginning as pet projects of some enthusiastic learner. There is no greater example than the Linux kernel.

I think NNs are set to drive another revolution in the physical sciences. Things like the Deep Galerkin Method are truly amazing in applicability and efficiency. A lot has already been done using data-driven methods; equation-based methods are relatively new and, perhaps, more useful in the physical sciences. Other languages have a head start from their vast libraries of traditional methods. Still, Fortran could find a window of opportunity, which is in any case short-lived in fast-moving fields like ML and NNs.

4 Likes

To be honest, I think the elephant in the room when it comes to neural networks is GPU support. I cannot really see a Fortran implementation being competitive unless it is comparable in speed to something like cuDNN. Or maybe you can interface to it from Fortran?

3 Likes

You’re right, I forgot about that important detail. Still, writing or re-using GPU kernels is at least feasible. I’m coming from the position that, if there’s need and desire, an open-source Fortran framework à la Tensorflow is not only not insane, but a good idea. We could easily have it in 3-5 years if more Fortran programmers began to incorporate machine learning into their daily work.

2 Likes

The way I understand it, a Fortran NN library supporting GPUs would simply provide an interface to specialized hardware primitives like cuDNN, the Intel® oneAPI Deep Neural Network Library, or AMD’s ROCm, and not necessarily re-implement the algorithms itself.
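
Such an interface layer could start as a thin iso_c_binding module. The sketch below binds one real cuDNN entry point, cudnnCreate; the module name is invented, and a full wrapper would of course also cover the descriptor and convolution routines.

```fortran
! Thin iso_c_binding sketch of a cuDNN binding. Link with -lcudnn.
module cudnn_iface
  use, intrinsic :: iso_c_binding, only: c_int, c_ptr
  implicit none
  interface
    ! C prototype: cudnnStatus_t cudnnCreate(cudnnHandle_t *handle);
    integer(c_int) function cudnnCreate(handle) bind(c, name='cudnnCreate')
      import :: c_int, c_ptr
      type(c_ptr), intent(out) :: handle   ! opaque cudnnHandle_t
    end function cudnnCreate
  end interface
end module cudnn_iface
```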

From the cuDNN page:

Deep learning researchers and framework developers worldwide rely on cuDNN for high-performance GPU acceleration. It allows them to focus on training neural networks and developing software applications rather than spending time on low-level GPU performance tuning. cuDNN accelerates widely used deep learning frameworks, including Caffe2, Chainer, Keras, MATLAB, MxNet, PaddlePaddle, PyTorch, and TensorFlow.

If you look at PyTorch (pytorch.org), they support four different compute platforms.

With Fortran you would just need to put the code calling the accelerator primitives into different submodules. To swap from one backend to another, you just relink your application.
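
A minimal sketch of that submodule idea (all names invented): the parent module declares the interface once, each backend provides its own submodule implementation, and linking a different submodule’s object file swaps the backend without touching the callers.

```fortran
! Parent module: fixes the backend-neutral interface.
module conv_backend
  implicit none
  interface
    module subroutine conv2d(x, kernel, y)
      ! y must have shape [size(x,1)-size(kernel,1)+1,
      !                    size(x,2)-size(kernel,2)+1]
      real, intent(in)  :: x(:,:), kernel(:,:)
      real, intent(out) :: y(:,:)
    end subroutine conv2d
  end interface
end module conv_backend

! Plain-Fortran reference backend; a cuDNN- or oneDNN-backed submodule
! would implement the same interface in a separate source file.
submodule (conv_backend) conv_backend_cpu
contains
  module subroutine conv2d(x, kernel, y)
    real, intent(in)  :: x(:,:), kernel(:,:)
    real, intent(out) :: y(:,:)
    integer :: i, j
    do concurrent (i = 1:size(y,1), j = 1:size(y,2))
      y(i,j) = sum(x(i:i+size(kernel,1)-1, j:j+size(kernel,2)-1) * kernel)
    end do
  end subroutine conv2d
end submodule conv_backend_cpu
```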

6 Likes

You are right, but that is mostly necessary for training. I could perfectly well imagine training a network in a Python script but calling the trained network from Fortran, even on a cluster without any GPUs in the compute nodes. Of course, that depends on what exactly one wants to do.
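
As a sketch of that workflow (the file format and names here are invented, not any particular tool’s convention): a Python script dumps the trained weights as plain text, and a small Fortran program reads them back and runs a dense forward pass on the CPU.

```fortran
! Read weights exported by a (hypothetical) Python training script and
! evaluate one dense layer with ReLU. File format: first line "nout nin",
! then the weight rows, then the bias vector.
program infer
  implicit none
  real, allocatable :: w(:,:), b(:), x(:), y(:)
  integer :: nin, nout, u, i

  open(newunit=u, file='weights.txt', status='old', action='read')
  read(u, *) nout, nin
  allocate(w(nout, nin), b(nout), x(nin))
  read(u, *) (w(i, :), i = 1, nout)
  read(u, *) b
  close(u)

  x = 1.0                        ! stand-in input feature vector
  y = max(0.0, matmul(w, x) + b) ! dense layer with ReLU activation

  print *, y
end program infer
```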

3 Likes

With one vendor offloading simple, standard-conformant do concurrent constructs to the GPU automatically (see this post), Fortran may be the way to write long-lived code for machine learning. It has worked well for high-performance mathematical libraries in the past.
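
For reference, this is the kind of standard-conformant loop that post describes being offloaded automatically (e.g. with nvfortran’s -stdpar=gpu flag); no directives, just plain do concurrent:

```fortran
! SAXPY-style fused multiply-add, the core of a dense layer's forward
! pass, written as standard `do concurrent`. Compile e.g. with
! `nvfortran -stdpar=gpu` to offload, or any Fortran 2008 compiler on CPU.
program dc_saxpy
  implicit none
  integer, parameter :: n = 1000000
  real :: x(n), y(n), a
  integer :: i

  call random_number(x)
  call random_number(y)
  a = 2.0

  do concurrent (i = 1:n)
    y(i) = a * x(i) + y(i)
  end do

  print *, 'y(1) =', y(1)
end program dc_saxpy
```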

4 Likes

If large system contracts (LANL, NERSC, ORNL, …) required that compiler vendors support offload of DO CONCURRENT to accelerator hardware, then a lot more vendors would provide that option. Conceptually it is not that hard for a compiler that already supports offload via OpenMP or OpenACC directives.

1 Like

That is a great topic.

We do need as many NN and machine-learning applications written in modern Fortran as possible; that is without question.

If someone could show one small, minimal working example of a NN where modern Fortran beats PyTorch in terms of speed, that would be great.
Some very good small NN examples to begin with can be found in Andrew Ng’s Coursera courses, such as
https://www.coursera.org/specializations/deep-learning?
For example, training a NN to recognize a cat, etc. Those exercises are written in Python and are small enough to be rewritten in modern Fortran.
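
As a starting point in that spirit, here is a self-contained toy (layer size and learning rate are illustrative, and this is no benchmark against PyTorch): a one-hidden-layer network trained on XOR with plain gradient descent.

```fortran
! Tiny one-hidden-layer network trained on XOR with manual backprop.
program xor_net
  implicit none
  integer, parameter :: nh = 4, nepochs = 10000
  real, parameter :: lr = 0.5
  real :: w1(nh,2), b1(nh), w2(nh), b2
  real :: x(2,4), t(4), h(nh), y, dy, dh(nh)
  integer :: epoch, k

  ! XOR truth table: inputs as columns, targets in t.
  x = reshape([0.,0., 0.,1., 1.,0., 1.,1.], [2,4])
  t = [0., 1., 1., 0.]

  call random_number(w1); w1 = w1 - 0.5
  call random_number(w2); w2 = w2 - 0.5
  b1 = 0.; b2 = 0.

  do epoch = 1, nepochs
    do k = 1, 4
      ! Forward pass.
      h = sigmoid(matmul(w1, x(:,k)) + b1)
      y = sigmoid(dot_product(w2, h) + b2)
      ! Backward pass (squared-error loss, sigmoid derivatives).
      dy = (y - t(k)) * y * (1. - y)
      dh = dy * w2 * h * (1. - h)
      ! Gradient-descent update; w1 update is the outer product dh*x.
      w2 = w2 - lr * dy * h
      b2 = b2 - lr * dy
      w1 = w1 - lr * spread(dh, 2, 2) * spread(x(:,k), 1, nh)
      b1 = b1 - lr * dh
    end do
  end do

  do k = 1, 4
    h = sigmoid(matmul(w1, x(:,k)) + b1)
    y = sigmoid(dot_product(w2, h) + b2)
    print '(2f4.1, " -> ", f6.3)', x(:,k), y
  end do

contains

  elemental real function sigmoid(z)
    real, intent(in) :: z
    sigmoid = 1. / (1. + exp(-z))
  end function sigmoid

end program xor_net
```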

Furthermore, to get into NNs, we may also need to learn how to use Fortran to fully utilize the power of the GPU.

On the other hand, from a business point of view, for Fortran to continue to survive and thrive, one has to let more and more people use Fortran, directly or indirectly, partially or fully, as easily as possible. I do agree with @certik in post #4.

To begin with, making Fortran easier to call from other languages such as Python or Julia, and making it easier to call them from within Fortran, is a very important and urgent task.

2 Likes

Dmitry Alexeev of Nvidia has just released a Fortran wrapper for the PyTorch libraries.

9 Likes