I’ve got a Python implementation of a simple neural network, mainly using NumPy, and I’m trying to convert it to Fortran, but I’ve been stuck for days. Any suggestions or assistance would be appreciated.
These resources could help:
While preparing his book Modern Fortran, Milan Curcic maintained a dedicated GitHub repository, neural-fortran. Beliavsky’s survey of Fortran projects on GitHub likewise features a section dedicated to this topic. (It is difficult to give a specific answer while the question remains vague.)
Your question lacks specificity. However, some time ago I translated a Rust-based neural network into Fortran, primarily using ChatGPT and correcting its errors by hand.
Given how much this tool has improved, ChatGPT may be your best option. Nevertheless, if I were in your position, I would first modify the Python code to eliminate any external libraries.
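To illustrate what “eliminating external libraries” might look like, here is a hypothetical sketch (all names are illustrative, not from the original poster’s code): a NumPy matrix-vector product rewritten with explicit loops, so each line maps almost 1:1 onto a Fortran do-loop.

```python
def matvec(W, x):
    """y = W @ x using plain loops; translates directly to Fortran do-loops."""
    rows = len(W)
    cols = len(W[0])
    y = [0.0] * rows
    for i in range(rows):           # Fortran: do i = 1, rows
        for j in range(cols):       #            do j = 1, cols
            y[i] += W[i][j] * x[j]  #   y(i) = y(i) + W(i,j) * x(j)
    return y

W = [[1.0, 2.0], [3.0, 4.0]]
x = [0.5, 0.5]
print(matvec(W, x))  # [1.5, 3.5]
```

Once the Python is in this loop-based form, the translation is mostly a matter of declarations and 1-based indexing.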
Additionally, as pointed out by @nbehrnd, you might want to consider trying out neural-fortran. Unfortunately, my Fortran code is closed-source. However, a few days ago I started developing a Lua-based deep learning program (view it here; it may contain bugs, since the code is only a few days old). Having this in Fortran could be interesting as well, but do we really need more options when neural-fortran is already well established in the community?
Could you please share more about your code?
Since you mentioned that you’re mainly using NumPy, I assume (and kindly correct me if I’m wrong) that your code is pedagogical in nature, or, if it’s for industrial applications, that you’re not using advanced neural-network libraries for Python.
If your code is mainly simple linear-algebra code, then yes, it can be converted to Fortran fairly easily. But if you’re using advanced neural-network libraries such as TensorFlow or Keras, converting it to Fortran will require considerably more effort on your side.
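As a hypothetical example of the kind of “simple linear algebra” code that translates almost directly (names and sizes are illustrative, not from the poster’s code): each NumPy operation below has a Fortran counterpart — `@`/`np.dot` becomes the `matmul` intrinsic, `np.maximum` becomes `max`, and elementwise functions like `np.tanh` become the corresponding elemental intrinsics.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # Fortran: real :: W1(4,3)
b1 = np.zeros(4)
W2 = rng.standard_normal((2, 4))
b2 = np.zeros(2)

def forward(x):
    # Hidden layer with ReLU:  h = max(0.0, matmul(W1, x) + b1)
    h = np.maximum(0.0, W1 @ x + b1)
    # Output layer:            y = matmul(W2, h) + b2
    return W2 @ h + b2

y = forward(np.ones(3))
print(y.shape)  # (2,)
```

If your network is built from pieces like this, the port is mostly mechanical; the usual stumbling blocks are NumPy broadcasting and fancy indexing, which have no single-intrinsic Fortran equivalent.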
If it helps, I have the same transformer implemented in Python (with NumPy) and in Fortran; compare the transformer function in each: https://github.com/rbitr/llama2.ipynb/blob/master/llama2.ipynb and https://github.com/rbitr/llama2.f90/blob/original_master/llama2.f90. They should be fairly close translations, but it really depends on what you’re stuck on.
And here is how to implement GPT-2 transformer in Fortran: GitHub - certik/fastGPT: Fast GPT-2 inference written in Fortran, translated from a Python NumPy code (picoGPT).
If you are only using basic NumPy functionality (array creation, array operations, etc.), then you may be able to use Pyccel to do this automatically:
Pyccel also generates an interface that lets you call your Fortran code from Python, which should allow you to do the work bit by bit if necessary.
Disclaimer: I am one of the main developers of Pyccel