My feeling has been that interoperability has received somewhat one-sided attention (calling *from* Julia is significantly more evolved). But I do not think this is done with any malice. Rather, it's just that Julians are of course directly interested in calling widespread external libraries (mostly Python, C++, or Python-wrapped C++, I'd say) and as such devote a lot of development time to that functionality. The reverse direction is more of a chore, and a thankless one, mostly because of the time-to-first-something problem: no one wants to start a Julia session inside their exe/script.
So, especially having this in mind, I find this a very, very, very welcome surprise. I already knew about StaticCompiler.jl and had heard a lot about its limitations.
It's very nice to see it perform so well here; this is an important milestone (in my view at least). I'm glad we are already here, even if only for simple, single objects. Then I have a question about deployment: here you show how to compile from an open REPL; how would that play within, say, a CMake build system? I mean, I would prefer to build the Julia object together with all the rest of the Fortran library / application that is calling it (sorry if the question is silly, I realize it could well be; I'm just very curious about what seems a very nice possibility). If it's possible within a CMake build, we should be able at some point to do that within fpm (the Fortran package manager) too, I guess.
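To make the question concrete, here is a minimal sketch of how I imagine it could look in CMake: a custom command invokes Julia at configure/build time to emit the shared library, which is then linked into the Fortran target. All names here (`build_kernel.jl`, `libjlkernel.so`, the target names) are my own invention, and I'm going from memory on StaticCompiler's API (`compile_shlib`), so the script side would need checking against its docs:

```cmake
# Hypothetical sketch: build a Julia kernel into a shared library at build
# time, then link it into a Fortran executable. The Julia script
# build_kernel.jl would contain something like:
#   using StaticCompiler; compile_shlib(mysin, (Float64,), ARGS[1])
find_program(JULIA_EXECUTABLE julia)

add_custom_command(
  OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/libjlkernel.so
  COMMAND ${JULIA_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/build_kernel.jl
          ${CMAKE_CURRENT_BINARY_DIR}
  DEPENDS build_kernel.jl kernel.jl
  COMMENT "Compiling Julia kernel with StaticCompiler.jl"
)

# Wrap the generated library in a target so other targets can depend on it.
add_custom_target(jlkernel ALL
  DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/libjlkernel.so)

add_executable(myapp main.f90)
add_dependencies(myapp jlkernel)
target_link_libraries(myapp ${CMAKE_CURRENT_BINARY_DIR}/libjlkernel.so)
```

(The `.so` suffix is of course platform-specific; a real build would use `${CMAKE_SHARED_LIBRARY_SUFFIX}`.)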
Anyway, I'm installing Julia right now on my HPC account, with the goal of building your example and profiling it (vs a call to the intrinsic) all within Fortran. That way we could see whether there really is some imbalance due to 'who calls whom', and if so, quantify it.
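For reference, the Fortran side of such a timing test could be a simple `bind(C)` interface to the compiled object; this is just a sketch under the assumption that StaticCompiler emits a plain C-ABI symbol taking and returning a C `double` (the symbol name `jl_mysin` is made up):

```fortran
! Hypothetical sketch: time a C-ABI symbol from a StaticCompiler-built
! shared object against the Fortran intrinsic. Link against the library
! that exports the (assumed) symbol jl_mysin.
program profile_kernel
   use, intrinsic :: iso_c_binding, only: c_double
   implicit none

   interface
      function jl_mysin(x) bind(C, name="jl_mysin") result(y)
         import :: c_double
         real(c_double), value :: x   ! pass by value, C ABI
         real(c_double) :: y
      end function jl_mysin
   end interface

   integer :: i
   real(c_double) :: x, acc
   real :: t0, t1

   acc = 0.0_c_double
   call cpu_time(t0)
   do i = 1, 10000000
      x = real(i, c_double) * 1.0e-6_c_double
      acc = acc + jl_mysin(x)   ! compiled Julia call
      ! acc = acc + sin(x)      ! swap in the intrinsic for comparison
   end do
   call cpu_time(t1)

   ! Print the checksum too, so the loop cannot be optimized away.
   print *, "time (s):", t1 - t0, " checksum:", acc
end program profile_kernel
```

Running the same loop twice, once against `jl_mysin` and once against the `sin` intrinsic, would give a first rough number for any 'who calls whom' overhead.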
Well, if you want to take offense at debatable comments / unpleasant jokes about Julia, maybe don't do the same with other languages, right?
More seriously, being both open source and with a similar "target", I actually think Pythonistas are an easier catch than people used to MATLAB: there, everything is done under the hood by MathWorks engineers; you have no explicit control over many things and just benefit from the trade-offs and optimizations done by that benevolent dictator. Within Julia you have a lot of manual switches, be it `@inline`, `@inbounds`, `Threads.@threads` or whatever. That bears a lot more resemblance to the experience of the average Pythonista, fighting (or playing with joy) with `numba` or similar ways of overcoming the performance problems you hit when you are not just calling into numpy/scipy, than to the 'know nothing, run like a black box' experience that MATLAB gives. A MATLAB user normally knows nothing about performance tricks beyond what MathWorks documents (i.e. cryptic statements about vectorization, avoiding loops, etc.), so he/she would feel totally lost and uninterested in dealing with all of Julia's details and complexities. That's at least what comes to my mind, as someone who has coded a lot in MATLAB and is now progressively transitioning to modern Fortran.