How much faster is the Fortran version than the Python version (ill-defined question, I know)? Since the SciPy developers are much more proficient in Python than Fortran, maybe their decision makes sense. People who need better performance can use the Fortran version.
It's interesting that they claim bit-for-bit matching for the test cases compared to the old F77 versions.
Note that the Python version does not match the old F77 version, which has been deprecated.
I do not think anyone can work out a correct Python version based on the old F77 codebase with 249 GOTOs. This is exactly why I spent years on PRIMA, which provides a modernized and improved reference implementation of Powell's algorithms.
The Python version is a Python translation of the modern-Fortran reference implementation coded by me in PRIMA. It matches my modern-Fortran reference implementation bitwise on about 500 test problems according to what is mentioned on GitHub.
That's great news! I'm looking forward to using PRIMA in SciPy, and of course to the native version in Fortran projects. I'm not too concerned about the speed penalty, since I believe that in non-toy problems most of the time should be spent evaluating the objective and constraints. Instead, I'm surprised to see that SciPy includes PRIMA via a submodule to this new repo, where they just dumped all the code. Why not link to the original repo? If I understand correctly, new features and bug fixes to PRIMA now have to be developed in both Fortran and Python (fortunately, the C and MATLAB versions are just interfaces to the Fortran). So, adding a second repo just increases the likelihood of mismatches.
The release notes are also very much underselling PRIMA. A user who has never heard of PRIMA before will wonder why on Earth they switched to something different. Even a simple copy of the About section of the GitHub repo would have been better.
Interesting. I was curious whether they would use the modern Fortran version, but it looks like they don't want to depend on Fortran. It makes sense to do everything in Python if they can. Likewise, I don't want to depend on Python in Fortran projects. Having everything in one language simplifies maintenance a lot.
Recently, I was playing with a Fortran-only method of writing a Python extension (specific to CPython). The details are a bit ugly, but it could be simplified with a few wrapper types and functions (or a bind(Python) attribute). Writing a function to sum two integers looks like this:
function f_add(self,args) bind(c)
type(c_ptr), value :: self, args
type(c_ptr) :: f_add
type(c_ptr) :: p_a, p_b
integer(c_long) :: a, b
! No equivalent for variadic argument lists in Fortran...
! (we must do the unpacking in C, or do it one-by-one)
! res = PyArg_ParseTuple('(ii):f_add', ...)
! These are borrowed references
! We ignore type-checking, etc...
p_a = PyTuple_GetItem(args,0_py_ssize_t)
p_b = PyTuple_GetItem(args,1_py_ssize_t)
! Extract the values
a = PyLong_AsLong(p_a)
b = PyLong_AsLong(p_b)
! Do the sum and place the result in a PyLongObject
f_add = PyLong_FromLong( a + b )
end function
Then there is some boiler-plate to register the function as a Python module:
Full code: f_add.f90
module f_add_mod
use, intrinsic :: iso_c_binding
implicit none
integer, parameter :: py_uint32_t = c_int32_t
integer, parameter :: py_ssize_t = c_size_t
integer, parameter :: py_int64_t = c_int64_t
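! Reference-count bits marking the module definition as immortal and statically
! allocated, as expected by recent CPython versions for objects defined at compile time.
! Note the py_ssize_t kind on the shift operands so the constants do not overflow a default integer.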
integer(py_ssize_t), parameter :: py_statically_allocated_flag = shiftl(1_py_ssize_t, 7)
integer(py_ssize_t), parameter :: py_immortal_initial_refcnt = shiftl(3_py_ssize_t, 30)
integer(py_ssize_t), parameter :: py_static_immortal_initial_refcnt = &
ior(py_immortal_initial_refcnt, &
shiftl(py_statically_allocated_flag,32))
type, bind(c) :: object_
! This is a union of flags and refcnt
integer(py_int64_t) :: ob_refcnt_full
type(c_ptr) :: ob_type ! type(PyTypeObject)
end type
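! Mirrors CPython's PyModuleDef_Base struct (what the PyModuleDef_HEAD_INIT macro initializes in C)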
type, bind(c) :: PyModuleDef_Base
type(object_) :: ob_base
type(c_funptr) :: m_init = c_null_funptr
integer(py_ssize_t) :: m_index = 0
type(c_ptr) :: m_copy = c_null_ptr
end type
type, bind(c) :: PyModuleDef
type(PyModuleDef_Base) :: m_base
type(c_ptr) :: m_name
type(c_ptr) :: m_doc
integer(py_ssize_t) :: m_size = -1
type(c_ptr) :: m_methods = c_null_ptr
type(c_ptr) :: m_slots = c_null_ptr
type(c_funptr) :: m_traverse = c_null_funptr
type(c_funptr) :: m_clear = c_null_funptr
type(c_funptr) :: m_free = c_null_funptr
end type
character(len=10,kind=c_char), protected, target :: m_name = "f_add_mod"//c_null_char
character(len=36,kind=c_char), protected, target :: m_doc = &
"f_add_mod - add integers in Fortran"//c_null_char
integer(c_int), parameter :: METH_VARARGS = 1_c_int
type, bind(c) :: PyMethodDef
type(c_ptr) :: ml_name = c_null_ptr
type(c_funptr) :: ml_meth = c_null_funptr
integer(c_int) :: ml_flags = METH_VARARGS
type(c_ptr) :: ml_doc = c_null_ptr
end type
character(len=6,kind=c_char), protected, target :: f_add_str = "f_add"//c_null_char
character(len=29,kind=c_char), protected, target :: f_add_docstr = "f_add(a: int, b: int) -> int"//c_null_char
interface
function PyModule_Create2(ptr, apiver) bind(c,name="PyModule_Create2")
import c_ptr, c_int
type(c_ptr), value :: ptr
integer(c_int), value :: apiver
type(c_ptr) :: PyModule_Create2
end function
function PyTuple_Size(p) bind(c,name="PyTuple_Size")
import c_ptr, py_ssize_t
type(c_ptr), value :: p
integer(py_ssize_t) :: PyTuple_Size
end function
function PyTuple_GetItem(p,pos) bind(c,name="PyTuple_GetItem")
import c_ptr, py_ssize_t
type(c_ptr), value :: p
integer(py_ssize_t), value :: pos
type(c_ptr) :: PyTuple_GetItem
end function
function PyTuple_SetItem(p,pos,o) bind(c,name="PyTuple_SetItem")
import c_ptr, c_int, py_ssize_t
type(c_ptr), value :: p
integer(py_ssize_t), value :: pos
type(c_ptr), value :: o
integer(c_int) :: PyTuple_SetItem
end function
function PyTuple_New(len) bind(c,name="PyTuple_New")
import c_ptr, py_ssize_t
integer(py_ssize_t), value :: len
type(c_ptr) :: PyTuple_New
end function
function PyLong_AsLong(obj) bind(c,name="PyLong_AsLong")
import c_ptr, c_long
type(c_ptr), value :: obj
integer(c_long) :: PyLong_AsLong
end function
function PyLong_FromLong(v) bind(c,name="PyLong_FromLong")
import c_ptr, c_long
integer(c_long), value :: v
type(c_ptr) :: PyLong_FromLong
end function
end interface
contains
! Replaces the C macro PyModule_Create (which supplies the API/ABI version to PyModule_Create2)
function PyModule_Create(mod) result(res)
type(c_ptr), value :: mod
type(c_ptr) :: res
integer(c_int), parameter :: python_abi_version = 3_c_int
res = PyModule_Create2(mod, python_abi_version)
end function
! Don't provide an external C-name, we keep it private to the module
function f_add(self,args) bind(c)
type(c_ptr), value :: self, args
type(c_ptr) :: f_add
type(c_ptr) :: p_a, p_b
integer(c_long) :: a, b
! No equivalent for variadic argument lists in Fortran...
! (we must either do the unpacking in C, or do it one-by-one)
! res = PyArg_ParseTuple('(ii):f_add', ...)
! These are borrowed references
! We ignore type-checking, etc...
p_a = PyTuple_GetItem(args,0_py_ssize_t)
p_b = PyTuple_GetItem(args,1_py_ssize_t)
! Extract the values
a = PyLong_AsLong(p_a)
b = PyLong_AsLong(p_b)
! Do the sum and place the result in a PyLongObject
f_add = PyLong_FromLong( a + b )
end function
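! Module initialization entry point: on import, CPython looks up a symbol named
! PyInit_<module name> in the shared library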
function PyInit_f_add_mod() result(m) bind(c,name="PyInit_f_add_mod")
type(c_ptr) :: m
type(PyModuleDef), save, target :: f_add_mod_handle
type(PyMethodDef), save, target :: m_methods(2)
! Fill the available methods
m_methods(1) = PyMethodDef(c_loc(f_add_str), c_funloc(f_add), METH_VARARGS, c_loc(f_add_docstr))
m_methods(2) = PyMethodDef(c_null_ptr, c_null_funptr, 0_c_int, c_null_ptr)
f_add_mod_handle = PyModuleDef( &
PyModuleDef_Base(object_(py_static_immortal_initial_refcnt,c_null_ptr)), &
m_name=c_loc(m_name), &
m_doc=c_loc(m_doc), &
m_methods=c_loc(m_methods))
m = PyModule_Create(c_loc(f_add_mod_handle))
if (.not. c_associated(m)) then
m = c_null_ptr
return
end if
end function PyInit_f_add_mod
end module f_add_mod
Once you've written the boilerplate, you can compile it into a shared library (here the Python runtime symbols are left to be resolved at load time via -undefined,dynamic_lookup),
$ gfortran -shared -fPIC -o f_add_mod.so -Wall f_add.f90 -L/usr/local/Cellar/gcc/14.2.0_1/lib/gcc/current/gcc/x86_64-apple-darwin23/14 -L/usr/local/Cellar/gcc/14.2.0_1/bin/../lib/gcc/current/gcc/x86_64-apple-darwin23/14 -L/usr/local/Cellar/gcc/14.2.0_1/lib/gcc/current/gcc -L/usr/local/Cellar/gcc/14.2.0_1/bin/../lib/gcc/current/gcc -L/usr/local/Cellar/gcc/14.2.0_1/lib/gcc/current -L/usr/local/Cellar/gcc/14.2.0_1/bin/../lib/gcc/current/gcc/x86_64-apple-darwin23/14/../../.. -Wl,-dead_strip_dylibs -Wl,-headerpad_max_install_names -Wl,-undefined,dynamic_lookup
make the extension module visible in the path, and run it:
$ export PYTHONPATH=./ # Add current folder to Python PATH
$ python -c "import f_add_mod; print(f_add_mod.f_add(3, 5))"
8
I'm just not convinced that math should be written in a scripting language. We need to make it trivial to create these Python wrappers to our modern Fortran code. Then we can just do it for all our libraries and start to build an alternative to SciPy upon the modern code.
I fully agree. A lot of SciPy algorithm code is transferable to Fortran. I tested this on the SciPy sparse iterative solvers recently (GitHub - ivan-pi/kiss: krylov-based iterative sparse solvers) with some help from LLMs.
The starting point would be an array class implementing the Python Array API standard. There are Fortran features which can help here, including assumed-rank arrays and elemental functions.
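To make that concrete, here is a minimal sketch of my own (the module and routine names array_api_sketch, relu and describe are purely illustrative, not from any existing library): an elemental function is written once for scalars and then applies element-wise to arrays of any shape, while an assumed-rank dummy argument lets a single wrapper accept data of any rank and dispatch on it with select rank.
module array_api_sketch
use, intrinsic :: iso_fortran_env, only: real64
implicit none
contains
! Elemental: defined for scalars, usable element-wise on arrays of any shape
elemental function relu(x) result(y)
real(real64), intent(in) :: x
real(real64) :: y
y = max(x, 0.0_real64)
end function
! Assumed-rank dummy: one entry point accepts any rank and dispatches on it
subroutine describe(a)
real(real64), intent(in) :: a(..)
select rank (a)
rank (0)
print '(a)', 'scalar'
rank (1)
print '(a,i0)', 'rank-1 array of size ', size(a)
rank default
print '(a,i0)', 'array of rank ', rank(a)
end select
end subroutine
end module array_api_sketch

program demo
use, intrinsic :: iso_fortran_env, only: real64
use array_api_sketch
implicit none
real(real64) :: v(4) = [-2.0_real64, -1.0_real64, 1.0_real64, 2.0_real64]
real(real64) :: m(2,3) = 0.0_real64
print *, relu(v)   ! elemental call on a whole array
call describe(v)   ! hits the rank (1) branch
call describe(m)   ! hits the rank default branch
end program demo
Elemental procedures date back to Fortran 95, while assumed-rank arguments and select rank are Fortran 2018, so this relies on a reasonably recent compiler.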
Hello, creator of "this new repo" here. To answer some potential questions:
Why not link to libprima/prima?
The Python implementation is not yet integrated into @zaikunzhang's repo; see the open PR Include Python implementation of COBYLA and associated CI test by nbelakovski · Pull Request #221 · libprima/prima · GitHub. Perhaps that will change in the future.
Why not include nbelakovski/prima as a git submodule?
While that fork of the prima repo does have all the code we need (without modification), it also contains the Fortran implementation, as well as its C, MATLAB and Python bindings (which are not the same thing as the new Python implementation). There is a concern about having too much unused code vendored in the repo for contributors to clone. Thus we use a script to vendor the code whilst stripping off the bulk of the unused parts.
Why use a separate repo for the script instead of just maintaining the script in the SciPy repo?
In hindsight this was a bad decision: there are benefits, but they seem outweighed by the potential costs; see DOC: PRIMA licence and reference fix by andyfaff · Pull Request #22738 · scipy/scipy · GitHub. But it is also not really worth reverting right now.
Hopefully that answers most of your questions, @ricriv! As for any "likelihood of mismatches", I don't think there is any concern about that from the SciPy side, as long as nbelakovski's fork is kept up to date.
Please open an issue (or PR!) about this on the SciPy repo to suggest any changes!
You are right about the general sentiment, but actually that was not the problem here. Since the modern Fortran implementation was going to replace the F77 COBYLA, we were ready to take it on, and it actually looked very close to being merged. Some difficulties close to the finish line were noted in ENH: optimize: integrate the PRIMA library for COBYLA by nbelakovski · Pull Request #20964 · scipy/scipy · GitHub, but the main problem was probably just lack of reviewer bandwidth (as usual!).
Would we have subsequently switched the modern Fortran implementation for a bit-for-bit Python implementation, had the modern Fortran implementation already been merged? Probably, yes, as we have the goal of eventually removing all Fortran, with some justification laid out in the thread Is adding new Fortran to SciPy generally allowed? - scipy - Scientific Python. On the other hand, that goal is clearly not going to be achieved until there is a suitable replacement for LAPACK, so in a different world, perhaps the modern Fortran PRIMA would have stuck around in SciPy for just as long.
This would be very cool to see! Race to see if you can get it done before the Rust community.
It's good to have clarity. Makes it a lot easier to decide where I should spend my effort.
Btw, we test SciPy with every commit of LFortran, but it looks like it won't be needed much longer, so we'll remove it and test the F77 codes directly.
Well, things are never that simple, of course. The irony is that the overarching concern expressed in the thread I linked to is about compiler toolchains (best summarised in Is adding new Fortran to SciPy generally allowed? - #9 by rgommers - scipy - Scientific Python and Is adding new Fortran to SciPy generally allowed? - #11 by rgommers - scipy - Scientific Python), and LFortran could be a key "missing piece" in that respect.
We just find ourselves in a bit of a chicken-or-the-egg situation: it is hard for LFortran to commit to SciPy while the prospect of removing all Fortran looms, and it is hard for SciPy to commit to Fortran given the uncertain future of compiler toolchains (along with a few other concerns described in The difficulties of using Fortran in SciPy - #54 by ev-br).

LFortran could be a key "missing piece" in that respect.
Btw, LFortran compiles PRIMA fully, if it helps: LFortran compiles PRIMA.
But let's not rehash the argument; I am happy that SciPy made a decision, even if it wasn't in our favor. That way we all know what to do.
@zaikunzhang I've opened an issue on the PRIMA repository, where I suggest enabling PRIMA to be used with the Fortran Package Manager (fpm).
I think it would have a tremendous impact on the Modern Fortran community! A method similar to stdlib, where a separate fpm branch is automatically deployed, could be used so that no modifications are required to the library's CMake folder structure.
Hi @lucascolley, thanks a lot for your reply. Not wanting too much unused code in SciPy is indeed understandable. I just read about partial clone, which can work together with submodules; see git-partial-submodule. Maybe that could be an alternative solution.
Please open an issue (or PR!) about this on the SciPy repo to suggest any changes!
@zaikunzhang, this is your chance to make some good advertising