Technical News from The Portland Group

 

In This Issue | MAR 2013

PGI C++ with OpenACC

CUDA 5 Features in PGI CUDA Fortran 2013

Using CULA with PGI Fortran

Writing Efficient OpenCL Code for Android Using PGCL

Upcoming Events

PGI will be exhibiting in booth #407 at the GPU Technology Conference, March 18-21 in San Jose, California. PGI will also be leading a number of sessions, including:

S3522 - Hands-on Lab: CUDA Fortran - Getting Started
16:00 Tues., Room 230A

S3447 - OpenACC 2.0 and the PGI Accelerator Compilers
14:00 Wed., Room 210E

S3448 - Kepler and CUDA 5 Support in CUDA Fortran
09:00 Thur., Room 210A

S3533 - Hands-on Lab: OpenACC Optimization
15:00 Thur., Room 230A

Register using PGI's discount code GMNVE175042TKK7 and receive 10% off.

PGI will also be participating in the International Supercomputing Conference, 16-21 June in Leipzig, Germany.

Resources

PGI Accelerator with OpenACC
Getting Started Guide

PGI Accelerator

CUDA Fortran

PGI User Forums

Recent News

PGI 2013 Released

PGI to Deliver OpenACC for Intel Xeon Phi

PGI and AMD Collaborate on APU Compilers

Next Issue

OpenACC 2.0

OpenACC Success Stories

Porting a GPGPU application to multi-core ARM with PGCL

PGI Accelerator Programming Model v2.0

The Portland Group, Inc.
Suite 320
Two Centerpointe Drive
Lake Oswego, OR 97035

PGI C++ with OpenACC

Michael Wolfe's Programming Guide

Since PGI first introduced the PGI Accelerator compilers for GPU programming back in 2009, users have been asking for C++ support. With PGI 2013 just released, it's finally here.

In this article, Michael Wolfe looks at using the OpenACC capabilities in the new GNU-compatible PGI C++ compiler. Written primarily for C++ programmers new to OpenACC, the article walks through a working code and charts its performance progress to illustrate key concepts, including pragma usage, structuring and managing data, interpreting compiler messages, and tuning basics. It also addresses pitfalls commonly found in C++ codes and looks at some limitations of the current implementation, along with potential ways to work around them. | Continue to the article…
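
To give a flavor of the pragma usage the article walks through, here is a minimal sketch of an OpenACC parallel loop in C++. The routine, data sizes, and compiler flags are illustrative and are not taken from the article:

    // saxpy.cpp -- illustrative only; built with something like:
    //   pgc++ -acc -Minfo=accel saxpy.cpp
    #include <cstdio>
    #include <vector>

    // Compute y = a*x + y on the accelerator.
    void saxpy(float a, const std::vector<float> &x, std::vector<float> &y)
    {
        int n = static_cast<int>(y.size());
        const float *xp = x.data();   // OpenACC data clauses operate on
        float *yp = y.data();         // contiguous arrays via raw pointers
        #pragma acc parallel loop copyin(xp[0:n]) copy(yp[0:n])
        for (int i = 0; i < n; ++i)
            yp[i] = a * xp[i] + yp[i];
    }

    int main()
    {
        std::vector<float> x(1 << 20, 1.0f), y(1 << 20, 2.0f);
        saxpy(2.0f, x, y);
        std::printf("y[0] = %f\n", y[0]);   // expect 4.000000
        return 0;
    }

The -Minfo=accel feedback emitted for a loop like this is exactly the kind of compiler messaging the article shows how to interpret.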

CUDA 5 Features in PGI CUDA Fortran 2013

CUDA 5 and the latest Kepler K20 GPU accelerators from NVIDIA introduce a number of exciting new capabilities of particular interest in HPC. At the top of the list is what NVIDIA calls separate compilation. Previously, all device function calls had to be inlined; the new CUDA linker lets CUDA developers call routines and link libraries in much the same way host programs do. In this article we present a separate compilation example in CUDA Fortran using David Bailey's multiple-precision libraries.
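
The article's example is in CUDA Fortran and uses David Bailey's multiple-precision libraries; purely to illustrate the separate-compilation mechanism itself, here is a small CUDA C++ sketch in which a kernel calls a device routine defined in another file. The file names, routine names, and sm_35 target are illustrative:

    //--- mathops.cu : device routine compiled on its own -------------------
    __device__ float squared(float x) { return x * x; }

    //--- main.cu : kernel that calls the routine from mathops.cu -----------
    #include <cstdio>

    extern __device__ float squared(float x);   // resolved by the device linker,
                                                 // no cross-file inlining required
    __global__ void square_all(float *v, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] = squared(v[i]);
    }

    int main()
    {
        const int n = 256;
        float *d_v;
        cudaMalloc(&d_v, n * sizeof(float));
        cudaMemset(d_v, 0, n * sizeof(float));
        square_all<<<(n + 127) / 128, 128>>>(d_v, n);
        cudaDeviceSynchronize();
        cudaFree(d_v);
        return 0;
    }

    // Build with relocatable device code so the CUDA 5 linker resolves the call:
    //   nvcc -arch=sm_35 -rdc=true mathops.cu main.cu -o square_all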

In addition, this article looks at how to use the new dynamic parallelism feature available on the Kepler K20 to launch nested kernels from CUDA Fortran. It also includes an example using the new dynamic parallelism-enabled CUBLAS library. | Continue to the article…
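
Again, the article works in CUDA Fortran; as a rough sketch of the nested-kernel idea in CUDA C++, here is a parent kernel launching a child kernel entirely on the device. The kernel names and launch configurations are made up for illustration, and a compute capability 3.5 device such as the K20 is required:

    // nested.cu -- dynamic parallelism sketch; build with something like:
    //   nvcc -arch=sm_35 -rdc=true nested.cu -lcudadevrt -o nested
    #include <cstdio>

    __global__ void child(int parent_block)
    {
        printf("child thread %d launched by parent block %d\n",
               threadIdx.x, parent_block);
    }

    __global__ void parent()
    {
        // One thread per block launches a nested grid from device code --
        // no round trip to the host is needed.
        if (threadIdx.x == 0)
            child<<<1, 4>>>(blockIdx.x);
    }

    int main()
    {
        parent<<<2, 32>>>();
        cudaDeviceSynchronize();   // waits for the parents and their children
        return 0;
    }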

Using CULA with PGI Fortran

John Humphrey from EM Photonics walks through the techniques and trade-offs of using each of the three programming interfaces available in the CULA Dense and CULA Sparse high-performance CUDA-enabled linear algebra libraries. Likely of most interest to CUDA developers is calling the libraries' device interface from within CUDA Fortran. | Continue to the article…
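
The article covers the Fortran side; purely to show the shape of a CULA host-interface call, here is a hedged C++ sketch. The header name, the culaFloat/culaInt types, and the culaSgesv signature are written from memory of CULA's LAPACK-style C conventions and should be treated as assumptions rather than the library's documented API:

    // cula_host_demo.cpp -- assumed CULA Dense host-interface usage; the header
    // and exact signatures may differ across CULA releases.
    #include <cula_lapack.h>   // assumed header name
    #include <cstdio>
    #include <vector>

    int main()
    {
        const int n = 4, nrhs = 1;
        // LAPACK-style column-major storage: solve A*x = b with A = 2*I.
        std::vector<culaFloat> a(n * n, 0.0f), b(n, 1.0f);
        std::vector<culaInt> ipiv(n);
        for (int i = 0; i < n; ++i) a[i * n + i] = 2.0f;

        if (culaInitialize() != culaNoError) return 1;

        // Host interface: pass host pointers and let CULA manage the GPU.
        culaStatus s = culaSgesv(n, nrhs, a.data(), n, ipiv.data(), b.data(), n);
        if (s != culaNoError) std::printf("culaSgesv failed (%d)\n", (int)s);
        else                  std::printf("x[0] = %f\n", b[0]);   // expect 0.500000
        culaShutdown();
        return 0;
    }

The device-interface variants take device pointers instead, which is what makes them natural to call with CUDA Fortran device arrays.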

Writing Efficient OpenCL Code for Android Using PGCL

This article takes an in-depth look at the OpenCL execution model and details the PGCL runtime implementation that presents a multi-core CPU as an OpenCL compute device. It examines the trade-offs and advantages of different work-item and work-group sizes. Building on that understanding of the PGCL kernel execution model, it identifies five guidelines for writing efficient OpenCL code for CPU-like devices. | Continue to the article…
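
The work-group size at the center of the article's trade-off discussion is chosen by the host when a kernel is enqueued. Here is a minimal, generic OpenCL host sketch showing that knob; the kernel, sizes, and build line are illustrative, and error handling is omitted for brevity:

    // wg_demo.cpp -- illustrative OpenCL host code.
    // Build (desktop Linux, for example): g++ wg_demo.cpp -lOpenCL
    #include <CL/cl.h>
    #include <cstdio>
    #include <vector>

    static const char *src =
        "__kernel void scale(__global float* v, float a) {  \n"
        "    size_t i = get_global_id(0);                    \n"
        "    v[i] = a * v[i];                                \n"
        "}                                                   \n";

    int main()
    {
        cl_platform_id plat;  cl_device_id dev;
        clGetPlatformIDs(1, &plat, nullptr);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, nullptr);

        cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, nullptr);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
        clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
        cl_kernel k = clCreateKernel(prog, "scale", nullptr);

        const size_t n = 1 << 16;
        std::vector<float> host(n, 1.0f);
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    n * sizeof(float), host.data(), nullptr);

        float a = 2.0f;
        clSetKernelArg(k, 0, sizeof(cl_mem), &buf);
        clSetKernelArg(k, 1, sizeof(float), &a);

        // The local (work-group) size is the tuning knob: n work-items split
        // into groups of 64 here. On a CPU device the best choice can differ
        // markedly from GPU-style settings -- that trade-off is what the
        // article explores.
        size_t global = n, local = 64;
        clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, &local, 0, nullptr, nullptr);

        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, n * sizeof(float), host.data(),
                            0, nullptr, nullptr);
        std::printf("v[0] = %f\n", host[0]);   // expect 2.000000

        clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }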

Let the Chips Fall Where They May

Tell us your story and you can receive a free, limited-edition PGI t-shirt.