GPU Programming Model vs. Vendor Compatibility Overview (preprint)

Many Cores, Many Models: GPU Programming Model vs. Vendor Compatibility Overview
by Andreas Herten
arXiv 12 Sep 2023

In recent history, GPUs became a key driver of compute performance in HPC. With the installation of the Frontier supercomputer, they became the enablers of the Exascale era; further large-scale installations are in progress (Aurora, El Capitan, JUPITER). But the early-day dominance of NVIDIA and their CUDA programming model has changed: the current HPC GPU landscape features three vendors (AMD, Intel, NVIDIA), each with native and derived programming models. The choices are ample, but not all models are supported on all platforms, especially if support for Fortran is needed; in addition, some restrictions might apply. It is hard for scientific programmers to navigate this abundance of choices and limits.

This paper gives a guide by matching the GPU platforms with supported programming models, presented in a concise table and further elaborated in detailed comments. An assessment is made regarding the level of support of a model on a platform.

This paper presented a methodology to categorize the support of
programming models on HPC GPU devices, assessing the level of
support and the provider (vendor or third-party). The results for a
number of selected models on GPUs of three vendors (AMD, Intel,
NVIDIA) were presented in Figure 1, accompanied by extensive
descriptions in section 4. The limitations of the method and some
key caveats of the presentation were discussed in section 5.

The support for NVIDIA GPUs can be considered the most comprehensive,
founded in their long-time prevalence in the field. CUDA
is possibly the most famous GPU programming model, and both
other vendors (AMD, Intel) provide tools for converting CUDA
C/C++ to their native model (HIP, SYCL). AMD designed HIP closely
to mimic CUDA-like programming and to enable it on other platforms.
And, indeed, NVIDIA and AMD GPUs can be used from the same
source code, and recently also Intel GPUs with chipStar. SYCL is an
entirely different programming model compared to CUDA or HIP,
but it also supports all three GPU platforms, either through the work
by Intel or by the community (Open SYCL). While OpenACC can be
used on NVIDIA and AMD GPUs, support for Intel GPUs does not
exist. OpenMP, on the other hand, is supported on all three platforms
– and even for both C++ and Fortran. Standard language
parallelism appears to be the model with the fastest change at the
moment, with multiple new projects in progress for all three platforms.
Kokkos and Alpaka both provide higher-level abstractions
and support all three platforms. Python, somewhat of an outlier in the
list, is also well-supported on all three platforms.

While the C++ support appears to be well on the way to good
compatibility and portability, the situation looks severely different
for Fortran. The only natively supported programming model on
all three platforms is OpenMP.


Nice, this is a formalized version of the original table by the same author here:
