Intel oneAPI 2022.3 release

https://www.intel.com/content/www/us/en/developer/articles/release-notes/fortran-compiler-release-notes.html

A few highlights of the latest ifx 2022.2.0 compiler release:

• Fortran 2018 coarray features and coarrays with allocatable components
Coarrays, including Fortran 2018 teams and events, are now fully supported (F2008 and F2018 support); a coarray sketch follows this list.

• DO CONCURRENT offload support (ifx only)
-fopenmp-[no]-target-do-concurrent (Linux) or /Qopenmp-[no]-target-do-concurrent (Windows) causes the compiler to generate offload code for DO CONCURRENT constructs. The default varies: if the option -fopenmp-targets (/Qopenmp-targets) is specified, the default is ON; otherwise it is OFF. This option is available only in ifx; a DO CONCURRENT sketch follows as well.
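First, a minimal coarray sketch of my own, not taken from the release notes, exercising a coarray with an allocatable component plus F2018 teams; something like ifx -coarray teams_demo.f90 should build it, though the exact coarray flag may vary by version.

    program teams_demo
      use, intrinsic :: iso_fortran_env, only: team_type
      implicit none
      type :: field_t
        real, allocatable :: values(:)   ! allocatable component of a coarray
      end type field_t
      type(field_t) :: f[*]
      type(team_type) :: odd_even

      allocate(f%values(10))
      f%values = real(this_image())      ! tag the data with the global image index

      ! Split the images into two teams by parity of the image index (F2018).
      form team (1 + mod(this_image(), 2), odd_even)
      change team (odd_even)
        ! Image numbering is local to the team in here.
        if (this_image() == 1) &
          print *, 'team leader; global image was', int(f%values(1))
      end team
    end program teams_demo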
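And a sketch of what the second bullet enables: a DO CONCURRENT loop that the compiler can turn into offload code. The build line follows the flag spellings quoted above; the value passed to -fopenmp-targets depends on your device, so treat it as an assumption.

    ! Hypothetical build line (Linux):
    !   ifx -fopenmp-target-do-concurrent -fopenmp-targets=spir64 saxpy_dc.f90
    program saxpy_dc
      implicit none
      integer, parameter :: n = 100000
      real, allocatable :: x(:), y(:)
      integer :: i

      allocate(x(n), y(n))
      call random_number(x)
      y = 1.0

      ! Candidate for GPU offload under the option described above.
      do concurrent (i = 1:n)
        y(i) = 2.0*x(i) + y(i)
      end do

      print *, 'y(1) =', y(1)
    end program saxpy_dc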

Does this mean distributed computing on GPUs is now possible?

2 Likes

I believe @rouson has a project he intends to test this with real soon.

If I had to guess, DO CONCURRENT loops will be offloaded to the GPU on the node of the current image, but not distributed across nodes. So the GPU calculations would not be “distributed”, but a code could be doing distributed calculations and still make use of the GPU, along the lines of the sketch below.
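Something along these lines, as a purely hypothetical sketch: each image runs its own DO CONCURRENT kernel, which the compiler may offload to that node’s GPU, while distribution across images stays on the coarray side.

    program hybrid_sketch
      implicit none
      integer, parameter :: n = 100000
      real, allocatable :: local(:)
      real :: total
      integer :: i

      allocate(local(n))
      call random_number(local)

      ! Per-image compute kernel; the candidate for offload to this node's GPU.
      do concurrent (i = 1:n)
        local(i) = sqrt(local(i))
      end do

      ! Cross-image work stays on the host.
      total = sum(local)
      call co_sum(total)          ! F2018 collective: result on every image
      if (this_image() == 1) print *, 'global sum =', total
    end program hybrid_sketch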

I’d love to hear a detailed explanation from Intel on this :wink:

That’s what I had in mind but expressed it poorly.

I wish there was a native Mac version for the Apple Silicon processors. :cry:

2 Likes

I noticed this warning in the C++ section of their release notes:

Intel® C++ Compiler Classic (icc/icpc) is deprecated and will be removed in a oneAPI release in the second half of 2023. Intel recommends that customers transition now to using the LLVM-based Intel® oneAPI DPC++/C++ Compiler (icx/icpx) for continued Windows* and Linux* support, new language support, new language features, and optimizations.

Does this mean that mixed Fortran and C++ projects need to transition to the LLVM-based ifx compiler after this release, or is it possible to use ifort along with the LLVM-based C/C++ compilers?

The latest packages for Debian/Ubuntu (https://apt.repos.intel.com/oneapi) are still 2022.2.

UPDATE: the package versioning seems to be misleading. As shown on the Release Notes page, the 2022.3 release of Intel oneAPI contains version 2022.2 of icx/ifx and 2021.7 of icc/ifort, and these are exactly the versions available for Debian/Ubuntu. So it is apparently up to date.

3 Likes

Attention @greenrongreen, perhaps you can comment on the above inquiry?

@plevold, you may know that Intel’s Fortran community forum (Intel® Fortran Compiler - Intel Community) is a good place to get feedback on your question above.

Intel’s LLVM compilers are ABI compatible with the Intel Classic compilers; there are no changes in our ABI. Thus, you can mix objects freely between IFORT and IFX, and with ICC and ICX/DPCPP. The one caveat is objects with OMP offload: those objects use a fat binary format for the offload code.

Obviously you need to compile offload sources with ifx. What is less obvious, but called out in the Porting Guide, is that for those objects you must use ifx or icx as your link driver, so that our link wrapper knows to process fat objects and create fat objects.

So no, you do not need to change builds to exclusively use Intel’s LLVM compilers for CPU-targeting applications. And you do not need to recompile existing libs.
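For what it’s worth, here is a sketch of the mixing described above; the file and routine names are made up for illustration, and the module and program would live in the two files named in the comments (shown as one listing for brevity).

    ! ifort -c legacy_lib.f90        ! classic compiler builds the library object
    ! ifx   -c main.f90              ! LLVM-based compiler builds the program
    ! ifx   legacy_lib.o main.o      ! per the post above, ifx/icx must drive the
    !                                ! link if any object contains OMP offload code
    module legacy_lib
      implicit none
    contains
      pure function twice(x) result(y)
        real, intent(in) :: x
        real :: y
        y = 2.0*x
      end function twice
    end module legacy_lib

    program main
      use legacy_lib
      implicit none
      print *, twice(21.0)   ! ifort-built object linked into an ifx-built program
    end program main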

Ron.
#IAmIntel

4 Likes

Great to hear that. Thanks for the explanation, @greenrongreen! I hope to try out ifx at some point, but I need to get our projects up to the latest ifort version first. Baby steps…

The following is my personal opinion, offered as open discourse; the views are solely my own and do not express the views or opinions of my employer, Intel Corporation.

As for DO CONCURRENT:
We’re writing up a usage guide for this and for OpenMP. It’s really common sense to anyone who has done GPU offload, I mean really: you have constraints, as you do for any hardware platform.
No, you can’t do file I/O or POSIX calls in the kernel to stat or manipulate inodes. You can’t offload an MPI program and have kernels magically doing communication across nodes. You can’t drive a GUI from the GPU or interact with the user. And coarray objects? No, that makes no sense: who would do that, and why, just to prove you can? Offload the local object, not some object off on another node; let that node do its own work. Too complicated, and complications lead to bad code and errors. Date and time functions, no. Getting environment variables or command-line args, no. GPUs today are for compute kernels and data you offload from the attached node. They are not CPUs with full functionality.

We’ll get this into documentation soon enough. This is our initial offering of this functionality. We will enhance it over time within the constraints of the device. And as always, documentation is lagging software development.
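To illustrate the sort of constraints listed above with an example of my own (not from the forthcoming guide): keep the offloaded loop body to pure computation on data you have offloaded, and leave I/O, OS interaction, and cross-image traffic on the host.

    program kernel_rules
      implicit none
      real :: a(1000)
      integer :: i

      call random_number(a)          ! fine: host code, before the kernel

      do concurrent (i = 1:size(a))
        a(i) = exp(a(i))             ! fine: pure computation on offloaded data
        ! print *, a(i)              ! no: file/console I/O inside the kernel
        ! no environment or command-line queries, date/time calls, or
        ! coarray references to other images in here either
      end do

      print *, maxval(a)             ! fine: back on the host
    end program kernel_rules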

4 Likes

I wonder if Intel has plans for supporting GPU offloading via OpenACC, like other vendors / open-source compiler projects are striving for, or if the plan is to stick to OpenMP / standard parallelism.

Can you name another compiler vendor doing GPU offloading via OpenACC? The only portable parallel support I know of is OpenMP, MPI, and coarray Fortran; these work with most compilers. OpenACC is only NVIDIA as far as I know, and the Intel stuff only works if you have an Intel GPU. If people know otherwise, I would be very interested to know.

I haven’t tried it, but I thought that GFortran supported OpenACC offloading to some extent: OpenACC (The GNU Fortran Compiler).

HPE/Cray supports GPU offloading using OpenACC. AMD has been part of the OpenACC standard’s committee since 2015, although their Fortran compiler does not seem to support it yet. My impression, as far as I remember from reading around the internet, is that the plan of most vendors (at least AMD and NVIDIA) is to adopt the LLVM Flang front end as soon as it is ready for prime time, after which they can easily support GPU offloading via OpenACC.
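For reference, this is roughly the shape of OpenACC offload in Fortran, as a minimal sketch of mine: nvfortran builds this kind of loop with -acc, and GFortran with -fopenacc, as far as I know.

    program saxpy_acc
      implicit none
      integer, parameter :: n = 100000
      real, allocatable :: x(:), y(:)
      integer :: i

      allocate(x(n), y(n))
      call random_number(x)
      y = 1.0

      ! Offload the loop; copy x to the device, move y in and out.
      !$acc parallel loop copyin(x) copy(y)
      do i = 1, n
        y(i) = 2.0*x(i) + y(i)
      end do
      !$acc end parallel loop

      print *, 'y(1) =', y(1)
    end program saxpy_acc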

If you have any examples, I would be very interested. Jane and I have been doing some benchmarking of parallel programming using an NVIDIA GPU versus a conventional CPU.

AMD and NVIDIA are two vendors. That excludes some of the vendors I use (NAG, Intel, Cray, GFortran) and others I have worked with (IBM, NEC, Fujitsu).

Indeed, though I believe most of these vendors do not support GPU offloading in any form (excluding GFortran and Cray, which support GPU offloading via OpenACC and OpenMP, as far as I know). Oh, I see that IBM also supports it via OpenMP on NVIDIA GPUs.

If you have any examples of GPU offloading using OpenACC and OpenMP with GFortran and Cray, I would be very interested in running them. I don’t have access to an IBM system any more.

1 Like

Perhaps you can have a look at this nice project: GitHub - mrnorman/miniWeather: A parallel programming training mini app simulating weather-like flows

I don’t have time to look at this. Jane and I are currently benchmarking some common problems in Fortran, parallelised using OpenMP, MPI, and coarray Fortran, plus NVIDIA GPU offload.

If you have any code you have developed that addresses parallelisation using OpenMP, MPI, or coarray Fortran, and NVIDIA or Intel GPU offload, we would be very interested.

Ian