Does preventing the use of global variables make the code faster?

I am translating my Fortran code to Julia.
From what I have learned from the Julia folks, they told me to avoid global variables by all means, because using global variables slows down the code.
I wonder, is this true in Fortran as well?

For example, I usually put some subroutines in a module, and at the beginning of the module I have many global (module) variables, like below:

module Mod
use constants
use mixture
use formats
implicit none 
real(kind=r8), private, allocatable, save :: Yji(:,:),t(:)
integer(kind=i8), private, save :: imode,iter,itermax &
                                  ,msample,mgauss,mgauss_tot,mgauss_max,mgauss_min,nsub,mi,mitot,dim_p,kmix 
real(kind=r8), private, save :: tsum,mu1,sig1,mu2,sig2,muV,SigV,w1,w2,sig,sig_inv,sigsq2_inv,D &
							   ,normpdf_factor_pYq_i,log_normpdf_factor_pYq_i,sigma_factor
real(kind=r8), private, allocatable, save :: LL_iter(:),muk_iter(:,:),sigk_iter(:,:) &
											,muV_iter(:,:),sigV_iter(:,:),wk_iter(:,:),sigma_iter(:) &
											,mukerror_iter(:,:),sigkerror_iter(:,:) &
											,muVerror_iter(:,:),sigVerror_iter(:,:),sigmaerror_iter(:) &
											,ar_gik(:),ar_gik_k(:,:) &
											,thetas(:,:,:),thetaspr(:,:,:) &
											,norm(:),norm_sig(:),normerr(:),log_norm(:) &
											,pYq_ikm(:,:,:),hi_diag(:,:,:,:),yimhi(:,:,:,:) &
											,pYq_km(:,:),log_pYq_km(:,:),h_diag(:,:,:),ymh(:,:,:) &
											,pYq_ikm_o(:,:,:)

real(kind=r8), private, allocatable, save :: wnow(:),log_wnow(:),nik(:,:),nik_sig(:,:),wknik(:,:),wknik_sig(:,:),tauik(:,:) 
integer(kind=i8), private, allocatable, save :: isub_Metroplis_gik_all(:,:),mgauss_ik(:,:)

contains 
subroutine init(...)
...
end subroutine init

subroutine calc(...)
...
end subroutine calc
...
end module Mod

I store many variables and arrays as global (private) module variables. I wonder, is this a bad habit?
Do you also avoid using global variables as much as you can, and put that stuff in the arguments of the subroutines instead?

If I put everything in the arguments of a subroutine, especially many arrays, will that create many unnecessary temporary arrays, and therefore many unnecessary memory operations that can slow down the code?

Thanks in advance!


Module variables are different from global variables (the obsolescent COMMON block). Of course, the advantages of using modules are most prominent when the same data and the same variables need to be operated on by multiple different subroutines. There are some good programming practices that can be followed with modules, such as declaring everything in the module PRIVATE and exposing only selected entities with PUBLIC, or always using ONLY when USEing a module.
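
For illustration, here is a minimal sketch of that PRIVATE/PUBLIC/ONLY pattern (the module and variable names are made up):

module settings
   implicit none
   private                                ! hide everything by default
   public :: nstep, dt, init_settings     ! expose only what callers need

   integer :: nstep = 0
   real    :: dt = 0.0
   real    :: scratch(100)                ! stays private to the module

contains

   subroutine init_settings(n, step)
      integer, intent(in) :: n
      real,    intent(in) :: step
      nstep = n
      dt    = step
   end subroutine init_settings

end module settings

Callers then import only what they actually use, e.g. use settings, only: init_settings, nstep.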

On the other hand, passing variables through arguments usually results in stronger type checking (I once worked on a code that could not be compiled with GFortran 10 because F77-style subroutines mixed REAL and COMPLEX data types). From my understanding, Fortran always passes arguments by reference (unlike C), so it doesn’t really matter whether you “feed” the arrays through arguments or through modules. There are some special cases you should avoid, like passing non-contiguous array slices such that the compiler is forced to perform implicit copying, thus slowing down the code.
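
A minimal sketch of the slice-copy case (the routine and variable names here are just placeholders):

program slice_copy_demo
   implicit none
   real :: a(1000, 1000)
   a = 1.0
   ! a(1,:) is a non-contiguous row section; with an explicit-shape dummy the
   ! compiler typically has to copy it in and out through a temporary array,
   ! which can be costly inside a hot loop.
   call scale(a(1, :), size(a, 2))
   ! The contiguous column a(:,1) can usually be passed without a temporary.
   call scale(a(:, 1), size(a, 1))
contains
   subroutine scale(v, n)
      integer, intent(in)    :: n
      real,    intent(inout) :: v(n)   ! explicit-shape dummy expects contiguous data
      v = 2.0 * v
   end subroutine scale
end program slice_copy_demo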

Unfortunately I don’t know enough Julia to reason why global variables are to be avoided like the plague in that language.

1 Like

I tend to avoid “global” variables (i.e. shared, possibly hidden state), not for performance reasons, but because they make the code harder to reason about. They may imply an inherent order in which the procedures must be called, which isn’t obvious from their interfaces. Or calling a procedure with the same argument values may actually produce a different result, because the result depends on things besides the arguments. The fancy term for “no global variables” is referential transparency. It means that everything one needs (or should need) to know about calling a procedure is communicated via its interface.

6 Likes

I do try to avoid module variables and to pass variables as arguments instead. Looking at your code, there are so many variables defined that passing them all to various procedures could be tedious. Can related variables be grouped into derived types to make passing them simpler?
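
For example, something roughly like this (the type name em_state is made up; the components are a few of the variables from your module, with r8 and i8 taken from your constants module):

type :: em_state
   real(kind=r8), allocatable :: Yji(:,:), t(:)
   real(kind=r8), allocatable :: LL_iter(:), muk_iter(:,:), sigk_iter(:,:)
   integer(kind=i8)           :: imode, iter, itermax, msample, mgauss
   real(kind=r8)              :: mu1, sig1, mu2, sig2, sig, D
end type em_state

subroutine calc(state)
   type(em_state), intent(inout) :: state
   ! work with state%Yji, state%iter, ... instead of module variables
end subroutine calc

Then a single state argument travels through the call chain instead of dozens of separate arrays and scalars.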

I suggest using the ONLY qualifier in USE statements so that when looking at code it is clear where a variable came from, and to avoid cluttering the namespace.

I doubt there is a performance hit to using module variables.

3 Likes

It may depend on the compiler in question, but ISTR that for Intel Fortran module variables can actually be accessed faster than other kinds of variables, as the compiler and linker already know where they live at build time.

(By the way, such variables are not truly global in the sense of other programming languages, as to access them you need to import them via a use statement, IIUIC.)

2 Likes

Fortran the language does not care. However, a particular Fortran compiler might. Consider a sequence of CALLs to f(args), g(args), h(args), etc. An inlining compiler could analyze this basic block as a whole, determine the data dependencies, and avoid doing computations without consequence (say, f defines some data, g doesn’t care, and h defines it again). Obviously, the more data entities are visible in each scope, the more work the optimizer needs to do and the more likely it is to give up. That is, I think, the rationale for keeping things as local as possible. Shifting to arguments (which can carry INTENTs) helps: if a particular argument is INTENT(OUT) in f and h and the actual argument is the same, the compiler can eliminate the definition in f without much analysis.
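
A small sketch of that last point (f, g, and h are just placeholders):

program intent_demo
   implicit none
   real :: x(100)
   call f(x)      ! x is INTENT(OUT) in f ...
   call g(3.0)    ! ... g never touches x ...
   call h(x)      ! ... and h (also INTENT(OUT)) overwrites x, so the
                  ! assignment inside f is dead; an optimizer can, in
                  ! principle, drop it from the INTENT information alone.
   print *, x(1)
contains
   subroutine f(a)
      real, intent(out) :: a(:)
      a = 1.0
   end subroutine f
   subroutine g(y)
      real, intent(in) :: y
      print *, y
   end subroutine g
   subroutine h(a)
      real, intent(out) :: a(:)
      a = 2.0
   end subroutine h
end program intent_demo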

1 Like

In Julia (like Fortran), globals are often a bad idea stylistically since they create hidden state. However, the performance impact of globals in Julia has to do with the compilation model. Since Julia lets you change the type of globals, using globals in Julia introduces type instability. For example, if in Julia I write

a = 2
f(x) = x + a      # `a` is a non-constant global
f(6)              # returns 8 (Int + Int)
a = 6.3           # rebinding `a` to a Float64
f(4)              # returns 10.3 (Int + Float64)

Because the type of a can change between calls, Julia has to look up which method of + to use at runtime, which is slow.

4 Likes

@CRquantum Thanks for the post and welcome to the forum.

Once we write a Julia translation backend for LFortran, I think you might be one of our users, if you still have Fortran code that you want to translate to Julia.

How will that backend work? I feel like translating Fortran to Julia would lose most of the advantage of Julia code, since nothing would be generic and the functions would all have different names for the same thing (e.g. sgemm, dgemm, cgemm vs *).

2 Likes

The LFortran project is not lacking in ambition. Have you thought about LFortran backends for Matlab/Octave, Python with NumPy (Pyccel is an active project), Cython, or R? There is much numerical code in Matlab, the language is not that different from Fortran, and the potential speedups could be large. Some efforts in this area are:

matlab2fmex: small translator which aims to convert numerical Matlab m-files to Fortran90 mex files, by Ben Barrowes. matlab2fmex first translates an m-file to a Fortran90 mex source file then compiles that Fortran90 file using Matlab’s mex and the local compiler.

matlab2fortran: performs some simple conversions from Matlab code to Fortran, by ebranlard

Mc2For: MATLAB to Fortran compiler, from Sable

4 Likes

Thank you!
Do you know how to permanently put the Fortran kernel of LFortran into Jupyter Notebook in Anaconda?
I read the LFortran installation guide, Installation - LFortran Documentation.
Say, on Windows: every time I want to activate the Fortran kernel in Jupyter Notebook, I need to open the Anaconda command window first,
then do

conda activate lf

then do

jupyter notebook

If I do not activate lf, there is just no Fortran kernel in the Jupyter notebook.

Is there a way to put the Fortran kernel into Jupyter Notebook permanently, just like Python, R, and Julia?

Thanks!

The goal of the backend would be for people to use it when they want to migrate away from Fortran. The backend would use whatever is the Julia way to implement a given feature; that’s the best we can do automatically. If you then want to use some features of Julia that cannot be generated automatically, you have to manually refactor the generated Julia code. One can always rewrite from scratch, but I think it will be very helpful to have an automatic translation that one can use as the basis for manual refactoring if needed.

Regarding the example you gave, Fortran has generic procedures, which allow using the same name and dispatching based on the argument types, so if that feature is used, we can generate equivalent Julia code.
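
For example, a generic interface along these lines (my_gemm and the module name are made up for illustration) lets several specific procedures share one name, dispatched on the argument kind:

module gemm_generic
   implicit none
   private
   public :: my_gemm

   interface my_gemm
      module procedure my_gemm_sp, my_gemm_dp   ! one name, two kinds
   end interface my_gemm

contains

   subroutine my_gemm_sp(a, b, c)
      real, intent(in)  :: a(:,:), b(:,:)
      real, intent(out) :: c(:,:)
      c = matmul(a, b)
   end subroutine my_gemm_sp

   subroutine my_gemm_dp(a, b, c)
      double precision, intent(in)  :: a(:,:), b(:,:)
      double precision, intent(out) :: c(:,:)
      c = matmul(a, b)
   end subroutine my_gemm_dp

end module gemm_generic

A call like call my_gemm(a, b, c) then resolves to the specific procedure matching the argument kinds, which maps naturally to a single generic function in Julia.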

1 Like

When you do conda install lfortran, it automatically registers the kernel in whatever environment you are in. If you are in the lf environment, it will register it there. If you want this in your “permanent” environment, which I assume is Conda’s base environment, then open a new terminal and type conda install lfortran; that should install it there. It works the same way as Python, R or Julia. Let me know if it does not work.

Yes, all of those. It’s just a matter of interest. So far I have seen the most interest in C++ and Julia translations. The C++ one is well on its way, and Julia will probably be second. For Python, people usually request wrappers rather than a translation (both can be done). We started the Python wrappers just yesterday in !1127.

Being able to robustly translate to other languages might actually enable many people to stay in Fortran, because they can develop in Fortran and just translate for the final application that does not want to depend on Fortran.

My man!
Indeed!
Just do

conda install lfortran -c conda-forge

Nice, it works! Now the Fortran kernel is in my Jupyter Notebook! So nice!
LFortran seems very nice!
Now that Fortran has an interactive interface, it is much more convenient to check some of the functions and subroutines in the code!
Fortran with an interactive interface, LFortran, good job!

2 Likes

I misunderstood the earlier discussion. I was talking about translating other languages to Fortran.

You can use print *, "xxx", x, y, z; that should work for printing. The read/write I/O and most of the other intrinsic functions (e.g. abs) should be implemented within a few months. You can try trigonometric functions like sin or cos; those should work.

You can follow our progress at Merge requests · lfortran / lfortran · GitLab, see what we are working on and what features got merged into master.

2 Likes

In the specific case of Julia, that is the Holy Grail. Not specifically for Fortran, but the possibility of generating statically typed code from a Julia program and compiling it into a library is one of the most important features the community thinks is needed for Julia to reach complete maturity.

In some cases one could think that it should be easy, as the code is compiled at runtime anyway, but doing that in general for a dynamic language is another story.

@certik if I may suggest something: having a benchmarking tool like Julia’s BenchmarkTools is probably the best first thing to have when LFortran goes into production. The ability to benchmark functions easily and quickly is something that, once you have it, is hard to let go of.

3 Likes

There is also a parallel Coarray Fortran Jupyter kernel developed by the Sourcery Institute.

Great, thanks!