Using PGI Accelerator compilers, programmers can accelerate applications on x64+accelerator platforms by adding OpenACC compiler directives to existing high-level standard-compliant Fortran, C and C++ programs and then recompiling with appropriate compiler options.
Sample Fortran matrix multiplication loop, tagged to be compiled for an accelerator.
!$acc kernels
  do k = 1,n1
    do i = 1,n3
      c(i,k) = 0.0
      do j = 1,n2
        c(i,k) = c(i,k) + a(i,j) * b(j,k)
      enddo
    enddo
  enddo
!$acc end kernels
Until now, developers targeting HPC accelerators have had to rely on language extensions to their programs. CPU+accelerator programmers have been required to program at a detailed level, including understanding and specifying data usage information and manually constructing sequences of calls to manage all movement of data between the CPU host and the accelerator.
The PGI Accelerator compilers automatically analyze whole program structure and data, split portions of the application between the host CPU and the accelerator device as specified by a standard set of user directives, and define and generate an optimized mapping of loops to automatically use the parallel cores, hardware threading capabilities and SIMD vector capabilities of modern accelerators. In addition to directives and pragmas that specify regions of code or functions to be accelerated, other directives give the programmer fine-grained control over the mapping of loops, allocation of memory, and optimization for the accelerator memory hierarchy. The PGI Accelerator compilers generate unified object files and executables that manage all movement of data to and from the accelerator while leveraging all existing host-side utilities—linker, librarians, makefiles—and require no changes to the existing standard HPC Linux programming environment.
Please also see the PGI Accelerator Programming user forum for additional questions and answers.
Q Which programming languages do the PGI Accelerator compilers support?
A PGI supports accelerators from within the PGFORTRAN Fortran 2003, PGCC® ANSI C99 and PGC++® gnu-compatible C++ compilers.
Q On which operating systems do PGI Accelerator compilers run?
A PGI Accelerator compilers run on 64-bit and 32-bit Linux, Windows and 64-bit OS X. Radeon targets are not supported on OS X.
Q Which accelerators can be targeted by PGI Accelerator compilers?
A PGI Accelerator compilers target all NVIDIA Tesla GPU accelerators with compute capability 2.0 or higher. In addition, they support the following accelerators from AMD:
In addition to the accelerators listed above, beginning with PGI version 15.10, multicore x64 CPUs can also be targeted on 64-bit and 32-bit Linux, Windows and OS X. See the OpenACC on Multicore CPUs PGInsider article for more information.
Q Do I need to install any 3rd party software?
A To use NVIDIA CUDA-enabled GPUs, you must first install the CUDA driver for your system. To use AMD Radeon GPUs, you must first install the Radeon driver for your system. All other necessary 3rd party software is included in the PGI installation packages.
Q Does the compiler support IEEE standard floating-point arithmetic?
A The accelerators available today support most of the IEEE floating-point standard. However, they do not support all the rounding modes, and some operations, notably square root, exponential, logarithm, and other transcendental functions, may not deliver full precision results. This is a hardware limitation that compilers cannot overcome.
Q Do PGI Accelerator compilers support double-precision?
A Yes. All supported accelerator targets, including NVIDIA GPUs with compute capability 2.0 or higher, provide hardware double-precision arithmetic.
Q Can I call a CUDA kernel function from my PGI compiled code?
A You can call CUDA device functions from PGI-compiled OpenACC compute regions in C, C++ or Fortran. The OpenACC code needs an appropriate acc routine(...) directive to tell the compiler that the given function is available for the device, and the compile line must include -ta=tesla (overriding the default -ta=tesla,host), because no host version of that function exists. See the OpenACC Routine Directive Part 2 PGInsider article for more details. To invoke a CUDA kernel from Fortran, you can use the CUDA Fortran extensions. Otherwise, you need a wrapper routine compiled by nvcc to actually launch the kernel, and you call that wrapper from the PGI-compiled code. There is no syntax to launch a CUDA kernel directly from PGI-compiled code.
Q Does the compiler support two or more accelerators in the same program?
A As with CUDA, you can use two or more GPUs by using multiple threads, where each thread attaches to a different GPU and runs its kernels on that GPU. The current release does not include support to automatically control two or more GPUs from the same accelerator region.
Q Will PGI be dropping support for the PGI Accelerator directive syntax?
A PGI will drop support for PGI Accelerator syntax at some point. Typically, PGI deprecates features for at least one year before dropping them.
Q Can I run my program on a machine that doesn't have an accelerator on it?
A Yes. PGI Accelerator compilers can generate PGI Unified Binary technology executables that work in the presence or absence of an accelerator.
Q Do I have to rebuild my application for each different model accelerator?
A The accelerator code generated uses the same technology that is used for graphics applications and games; that is, the program uses a portable intermediate format which is then dynamically translated and re-optimized at run time by the drivers supplied by the vendor for the particular model of GPU in your machine. This preserves your investment by allowing your programs to continue to work even when you upgrade your accelerator, or use your program on a machine with a different model.
Q Can I use function or procedure calls in my GPU code?
A PGI 2014 includes support for procedure calls (the OpenACC routine directive) on NVIDIA GPUs. Support on AMD Radeon is planned for a future release.
Q In what timeframe will PGI be including OpenMP 4.0 or 4.5 support?
A OpenMP 4.0 and 4.5 include many new features, including tasking extensions and task dependences, task groups, task cancellation, task priorities, task loops, thread binding, SIMD constructs, SIMD function compilation, user-defined reductions, additional atomic constructs, doacross-style synchronization between workshared loop iterations, plus a whole host of target/device features. PGI is planning to work on adding the tasking, binding, SIMD, synchronization, reduction, atomic and other CPU features in 2016. PGI is planning to start working on the OpenMP 4.x target features in 2017.
Q When will you support <my favorite feature> in your compiler?
A Some features cannot be supported due to limitations of the hardware. Other features are not being supported because they would not deliver satisfactory performance. Still other features are planned for future implementation. Your feedback can affect our priorities.
Q Which OpenACC features are supported in which release?
A PGI 2010 and later releases include the PGI Accelerator Fortran and C99 compilers supporting x64+NVIDIA systems running under Linux, OS X and Windows. PGI introduced support for OpenACC directives with Release 2012 version 12.6 of the PGI Accelerator compilers, and support for C++ was added with Release 2013. OpenACC support for AMD accelerators was added with Release 2014, and support for x64 CPUs as an accelerator target was added in the PGI Release 2015 version 15.9.
Following is a list of OpenACC 1.0 features and the PGI version in which they were added.
|Feature||Version||Feature||Version|
|!$acc kernels||12.3||!$acc declare||12.3|
|present()||12.6||openacc.h C hdr file||12.3|
|present_or_copy()||12.6||openacc_lib.h Ftn hdr file||12.3|
|create()||12.3||acc_malloc() for C||12.3|
|present()||12.3||acc_free() for C||12.3|
|deviceptr() in C||12.3||Environment variables:|| |
|deviceptr() in Ftn||14.1||ACC_DEVICE_TYPE||12.3|
|within kernels region|| ||acc_copyout||12.6|
|within parallel region|| ||acc_ispresent||12.6|
|Kernels clauses|| ||!$acc routine||14.1|
|Parallel clauses|| ||bind name()||14.7|
|Loops clauses|| ||#pragma atomic||14.4|
Following is a list of OpenACC 2.5 features and the PGI version in which they were added.
|Change in the behavior of the copy, copyin, copyout and create data clauses.||15.1|
|Change in the behavior of the acc_copyin, acc_create, acc_copyout and acc_delete API routines.||15.1|
|New default(present) clause for compute constructs.||15.7|
|Asynchronous versions of the data API routines.||15.9|
|New acc_memcpy_device API routine.||15.7|
|New OpenACC interface for profile and trace tools.||Due in 16.1|
|Change in the behavior of the declare create directive with a Fortran allocatable.||15.1|
|Reference counting added to device data.||--|
|Change in exit data directive behavior. New optional finalize clause.||--|
|New update directive clause, if_present.||--|
|New init, shutdown, set directives.||--|
|Change in the routine bind clause definition.||--|
|New API routines to get and set the default async queue value.||--|
|Num_gangs, num_workers and vector_length clauses allowed on the kernels construct.||--|
Q How much does it cost?
A License pricing for the PGI Accelerator compilers can be found in the pricing section. If you are a PGI licensee with a current PGI Subscription, you may upgrade your license in accordance with PGI's standard product upgrade policy.
Q How can I try it?
A To try out the PGI Accelerator compilers, follow these three steps:
Please contact PGI Sales for exchange, upgrade or subscription renewal information.