Fast.ai - Mojo may be the biggest programming language advance in decades

Another opponent of Fortran? The word “Fortran” appears once in the article, unfortunately in all capitals.

See also

Very interesting. If I understand this correctly, it means a shift for Python (via Mojo) towards becoming a low-level programming language.

So far, the landscape of low-level vs. higher-level programming languages has been described as follows:

https://devm.io/java/tornado-vm-java-162460 :

“The main challenge of hardware specialization is programmability. Most likely, each heterogeneous hardware has its own programming model and its own parallel programming language. Standards such as OpenCL, SYCL, and map-reduce frameworks facilitate programming for new and/or parallel hardware. However, many of these parallel programming frameworks have been created for low-level programming languages such as Fortran, C, and C++.
Although these programming languages are still widely used, the reality is that industry and academia tend to use higher-level programming languages such as Java, Python, Ruby, R, and Javascript. Therefore, the question now is,
how to use new heterogeneous hardware from those high-level programming languages?
There are currently two main solutions to this question: a) via external libraries, in which users might be limited to only a set of well-known functions; and b) via a wrapper that exposes low-level parallel hardware details into the high-level programs (e.g., JOCL is a wrapper to program OpenCL from Java in which developers need to know the OpenCL programming model, data management, thread scheduling, etc.). However, many potential users of these new parallel and heterogeneous hardware are not necessarily experts on parallel computing, and perhaps, a much easier solution is required.”

A new AI company called Modular is promising to deliver a new matrix-multiplication library/API or something similar. More in their two blog posts:

According to an announcement on their website, in a video stream on May 2 we can expect to learn about

“The world’s fastest unified AI execution engine
A new programming language that gives Python superpowers
A new way to unlock hardware”

The co-founder and CEO of the company is Chris Lattner, known for his work on LLVM, Clang, MLIR, and the Swift programming language.

Can we really expect a matrix library faster than the vendor versions? If yes, when will all the recent developments trickle down to Fortran?

2 Likes

Thanks, I think that’s exciting.

Yes. Julia already showed that it is possible: Matrix Multiplication · LoopVectorization.jl; they are generally faster than MKL.

If their product is open source and usable as a library, then I think we can start using it as a backend in LFortran right away. Well, after LFortran can compile fastGPT, that is. :wink: Soon.

2 Likes

Welcome to the future of AI with the new and shiny language - Mojo :fire: :

2 Likes

So this is the latest language that’s faster than Fortran but with the usability of a scripting language? Guess I’d better start converting all my Julia code that I converted from all my Python code that I converted from all my Matlab code that I converted from all my Fortran code. Just kidding, I never stopped writing Fortran. :rofl::joy::saluting_face:

12 Likes

Mojo is a very serious competitor to LPython: GitHub - lcompilers/lpython: Python compiler, but the LFortran and LPython combo is quite unique, and our goal is to make LCompilers very capable for the AI workflow (such as fastGPT) as well; it must run at maximum performance. We need things like automatic differentiation (for training) and whatever else is needed, including running on accelerators. But I am taking it one step at a time; for LFortran, the first step is to compile codes. This creates the foundation to build upon.

2 Likes

I merged the previous discussion about Mojo from the fastGPT thread into this one.

1 Like

Regarding the article fast.ai - Mojo may be the biggest programming language advance in decades, the last section is “Alternatives to Mojo”, where the author compares Julia, Numba, Cython, and JAX. I agree with assessments such as:

I’m really grateful Numba and Cython exist, and have personally gotten a lot out of them. However they’re not at all the same as using a complete language and compiler that generates standalone binaries.

However, the author fails to mention two other alternatives that are the same kind of thing (a complete language that generates standalone binaries): LPython and Codon.

Disclaimer: I am the lead developer of LPython.

Both LPython and Codon were released before Mojo and are already open source (unlike Mojo).

In my opinion, LPython, Codon and Mojo are pretty much the same idea: add types to Python and compile it like a standalone language, while being semantically equivalent to Python. All three fix Python’s performance and deployment problems.
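
To make this concrete, here is a minimal sketch of what such typed Python looks like (my own illustration using only standard annotations; each of the three compilers has its own exact rules and additional type names):

def mean(xs: list[float]) -> float:
    total: float = 0.0
    for x in xs:
        total += x
    return total / len(xs)

print(mean([1.0, 2.0, 3.0]))  # still plain Python: runs under CPython unchanged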

2 Likes

A recent blog post Compiling typed Python | Max Bernstein explains why adding types to Python to speed it up is tricky.

2 Likes

An alpha version of LPython has been released: LPython: Novel, Fast, Retargetable Python Compiler

Based on the novel Abstract Semantic Representation (ASR) shared with LFortran, LPython’s intermediate optimizations are independent of the backends and frontends. The two compilers, LPython and LFortran, share all benefits of improvements at the ASR level. “Speed” is the chief tenet of the LPython project. Our objective is to produce a compiler that both runs exceptionally fast and generates exceptionally fast code.

In this blog, we describe features of LPython including Ahead-of-Time (AoT) compilation, JIT compilation, and interoperability with CPython. We also showcase LPython’s performance against its competitors such as Numba and C++ via several benchmarks.

4 Likes

Btw, I figured out how to think about this:

Mojo is a strict superset of Python, while LPython is a strict subset of Python.

With all the pros and cons that follow from this: since any LPython code is just Python, you can use the existing Python tools with it (and it runs under CPython). But you need to modify your existing Python code to use it with LPython (although we give you very nice error messages), and in return you get Fortran speed. Mojo is the complementary approach: any Python code is Mojo code, so you don’t need to modify your code, although it won’t run at top speed; for that you must gradually modify it as well.

Finally, LFortran is a strict superset of Fortran. So any Fortran code will work with LFortran (eventually, once we enter production). And you shouldn’t need to modify your existing Fortran code to make it run fast with LFortran (this is different from Python, since Python by default is not fast, while Fortran was designed to be fast).

7 Likes

In the LPython announcement, the only restriction mentioned for the Python code is that it be type-annotated. Pyccel also handles a subset of Python, requiring that function arguments be annotated by type and rank, that the type of a variable cannot be changed, and that the type of a variable cannot depend on an if condition. I’d guess that LPython has similar requirements. A nice feature of Pyccel is type inference:

Pyccel uses type inference to calculate the type of all variables through a static analysis of the Python code. It understands all Python literals, and computes the type of an expression that contains previously processed variables. For example:

x = 3        # int
y = x * 0.5  # float
z = y + 1j   # complex
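
For contrast, here is the kind of code those restrictions rule out (my own hedged illustration based on the rules listed above, not an example from the Pyccel docs):

flag = True  # plain CPython happily runs all of this; the typed subset would not

x = 3        # x inferred as int
x = "three"  # rebinding changes x's type: not allowed

if flag:
    y = 1    # int on this branch
else:
    y = 1.0  # float here, so y's type depends on the condition: not allowed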

Based on the Python code examples in the blog, it appears that LPython does not infer types. Pyccel can generate C or Fortran code from Python, while LPython can generate C, C++, LLVM, or WASM. Pyccel is a transpiler rather than a compiler, and it could be used in conjunction with LFortran (or another Fortran compiler) that compiles the code it generates. Much of my Python code is unannotated. To get it working with LPython, I may try GitHub - Instagram/MonkeyType: A Python library that generates static type annotations by collecting runtime types.
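
In case it is useful to others, this is roughly the MonkeyType workflow as I understand it (a hedged sketch; mymodule is a placeholder for one’s own untyped module):

import monkeytype

import mymodule  # placeholder for your own untyped module

# Exercise the code under the trace so call and return types get recorded;
# afterwards `monkeytype stub mymodule` or `monkeytype apply mymodule`
# turns the recorded traces into annotations.
with monkeytype.trace():
    mymodule.main()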

1 Like

We plan to add a Fortran backend to ASR, so that both LPython and LFortran can use it.

Regarding implicit typing, implicit casting, and implicit declarations: that is a complex issue. For now, see Document design about implicit casting and implicit typing · Issue #2168 · lcompilers/lpython · GitHub and my reply on HN: I would say LPython, Codon, Mojo and Taichi are structured similarly as compiler... | Hacker News.
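
To give a flavor of the questions involved (my own hedged illustration, not the design from that issue):

x: float = 1   # should the int literal 1 implicitly cast to float?
y = x + 2      # and is 2 an int here, or implicitly a float?
n: int = 2.0   # should a float literal implicitly narrow to int, or be an error?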

An Aug 15 interview with @certik about LPython has been posted to YouTube. LFortran is also discussed.

Separately, Anaconda Engineering has just blogged about Numba, Mojo, and Compilers for Numerical Computing.

4 Likes

Thanks for sharing that article. It doubled as a good intro to Numba for me. Some sections I found interesting:

The most important part of any numerical computing system in 2023 [emphasis mine] is the multidimensional array. […] Moreover, arrays are very compiler-friendly data structures, which has created opportunities for many compilers and compiler technologies.

With no array type to operate on, Mojo is currently lacking in all the array functions that one would expect from a numerical computing language. These can surely be added, and even implemented directly in the Mojo language, but they are not there now. Hopefully Modular will learn from the recently created Python Array API standard and consider using that as the basis for their user-facing array API.

Multidimensional arrays … numerical computing … standards. Sounds familiar to Fortran users, no? It reminds me of a passage from the book of Ecclesiastes:

“The thing that hath been, it is that which shall be; and that which is done is that which shall be done: and there is no new thing under the sun.”

3 Likes

FYI:

IS MOJO THE FORTRAN FOR AI PROGRAMMING, OR MORE?

The rise of AI is definitely a big chance for Fortran. The question is whether the opportunity will be seized while the topic is hot, or whether we are still hoping generics will arrive before 2040. The only reason I stick to Fortran is a somewhat emotional decision, since writing robust code is very frustrating in this language. For any job that actually needs to be done and maintained, I can see how it is not a relevant option.

The Julia article states that the matrices were square. I saw in another video that another group also wrote matrix-multiplication code that is faster than MKL, but even in their case the matrices were square. MKL GEMM supporting non-square matrices probably means it had to forego some optimizations, which slows it down a little.