New Postings

One of the most important issues when programming a system with a GPU or any attached accelerator is managing data movement between host memory and device memory. This is a special case of managing a memory hierarchy, a problem that has been with us since the dawn of computing. The very first computer I programmed was an IBM 360/75 at the University of Illinois, which had a whole megabyte of magnetic core memory, composed of 400KB of fast core and 600KB of slow core. Systems in those days didn't support virtual memory, so programmers were responsible for staging data from disk (or tape) storage to memory and back, and for telling the operating system the maximum amount of memory the program would use (or on this system, the maximum amounts of fast and slow core the program would use). Continue reading ⇒

The cuSOLVER library was first included with the CUDA 7.0 toolkit. As a collection of libraries, cuSOLVER provides useful LAPACK-like features, and its list of features is growing with each release. The current version offers common matrix factorization and triangular solve routines for dense matrices, a sparse least-squares solver, and an eigenvalue solver. In addition, cuSOLVER provides a new refactorization library useful for solving sequences of matrices with a shared sparsity pattern.

Currently, CUDA Fortran does not have a ready-to-use interface module to the cuSOLVER library. This article shows how to write such an interface and presents two examples of its use: first, a simple Cholesky factorization, and second, the reduction of a generalized eigenproblem to a standard one. Continue reading ⇒
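To give a sense of what such an interface looks like, here is a minimal, hypothetical sketch (not the article's module) covering just two cuSOLVER entry points, the handle constructor and the double-precision Cholesky factorization; status codes and enumerations are simplified to c_int, and the full module developed in the article covers much more:

    module cusolverdn_sketch
      use iso_c_binding
      implicit none

      interface
         ! cusolverStatus_t cusolverDnCreate(cusolverDnHandle_t *handle)
         integer(c_int) function cusolverDnCreate(handle) &
              bind(C, name='cusolverDnCreate')
           import :: c_int, c_ptr
           type(c_ptr) :: handle
         end function cusolverDnCreate

         ! cusolverStatus_t cusolverDnDpotrf(handle, uplo, n, A, lda,
         !                                   Workspace, Lwork, devInfo)
         integer(c_int) function cusolverDnDpotrf(handle, uplo, n, A, lda, &
              workspace, lwork, devInfo) bind(C, name='cusolverDnDpotrf')
           import :: c_int, c_ptr, c_double
           type(c_ptr), value :: handle
           integer(c_int), value :: uplo, n, lda, lwork
           real(c_double), device :: A(*), workspace(*)
           integer(c_int), device :: devInfo
         end function cusolverDnDpotrf
      end interface
    end module cusolverdn_sketch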

Any software developer knows that developing and maintaining a large software project can be a difficult task, particularly when documentation is sparse and the original developers are unavailable. Sometimes a small change causes unexpected problems elsewhere in a program, and they can be very difficult to track down in a large codebase. Other times, one might want to explain how a program works to a new developer, and it would be very useful to have a visual representation of the program’s internal structure. When adapting existing HPC applications to OpenMP or OpenACC, sometimes it is difficult to prioritize what exactly should be parallelized first. These tasks can be difficult, labor intensive, or even impossible to do by hand. Continue reading ⇒


General GPU Programming

General purpose parallel programming on GPUs is a relatively recent phenomenon. GPUs were originally hardware blocks optimized for a small set of graphics operations. As demand arose for more flexibility, GPUs became increasingly programmable. Early approaches to computing on GPUs cast computations into a graphics framework, allocating buffers (arrays) and writing shaders (kernel functions). Several research projects looked at designing languages to simplify this task; in late 2006, NVIDIA introduced its CUDA architecture and tools to make data parallel computing on a GPU more straightforward. Not surprisingly, the data parallel features of CUDA map pretty well to the data parallelism available on NVIDIA GPUs. Here, we'll describe the data parallelism model supported in CUDA and the latest NVIDIA Kepler GPUs.

Why should PGI users want to understand the CUDA threading model? Clearly, PGI CUDA Fortran users should want to learn enough to tune their kernels. Programmers using the directive-based PGI Accelerator compilers with OpenACC will also find it instructive: it helps in understanding and using the compiler feedback (-Minfo messages) that indicates which loops were scheduled to run in parallel or vector mode on the GPU, and in tuning performance with the loop mapping clauses. So, let's start with an overview of the hardware in today's NVIDIA GPUs. Continue reading ⇒
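As a small preview of how that mapping surfaces in directive code, here is a hypothetical OpenACC Fortran sketch (mine, not the article's) using explicit loop mapping clauses; gang parallelism maps onto the GPU's thread blocks and vector parallelism onto the threads within a block:

    subroutine smooth(a, b, n, m)
      implicit none
      integer, intent(in)  :: n, m
      real, intent(inout)  :: a(n,m)
      real, intent(in)     :: b(n,m)
      integer :: i, j
      ! Map the outer loop across gangs (thread blocks) and the inner loop
      ! across a 128-wide vector (threads within a block).
      !$acc parallel loop gang vector_length(128)
      do j = 2, m-1
         !$acc loop vector
         do i = 2, n-1
            a(i,j) = 0.25 * (b(i-1,j) + b(i+1,j) + b(i,j-1) + b(i,j+1))
         end do
      end do
    end subroutine smooth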


Today's high performance systems are trending towards using highly parallel accelerators to meet performance goals within power and price limits. The most popular compute accelerators today are NVIDIA GPUs. Intel Xeon Phi coprocessors and AMD Radeon GPUs are competing for that same market, meaning we will soon be programming and tuning for a wider variety of host + accelerator systems.

We want to avoid writing a different program for each type of accelerator. There are at least three current options for writing a single program that targets multiple accelerator types. One is to use a library, which works well if the library contains all the primitives your application needs. Solutions built on class libraries with managed data structures are essentially another way to implement libraries, and again work well if the primitives suit your application. The potential downside is that you depend on the library implementer to support each of your targets now and in the future. Continue reading ⇒


OpenACC

The OpenACC API has two compute constructs, the kernels construct and the parallel construct. This article describes the differences between the two and use cases for each. To simplify the discussion, I will focus this article only on OpenACC gang and vector parallelism. OpenACC also supports worker parallelism, but that will be the subject of yet another article. I will also ignore any data movement by assuming that all required data is on the device. Continue reading ⇒
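As a brief, hypothetical illustration of the two constructs in Fortran (my example, not the article's), with all data assumed already present on the device:

    subroutine compute(a, b, c, d, n)
      implicit none
      integer, intent(in) :: n
      real, intent(inout) :: a(n), d(n)
      real, intent(in)    :: b(n), c(n)
      integer :: i, j

      ! kernels: the compiler analyzes the region and decides which loops
      ! to parallelize and how to schedule them.
      !$acc kernels present(a, b, c, d)
      do i = 1, n
         a(i) = b(i) + c(i)
      end do
      do j = 1, n
         d(j) = 2.0 * a(j)
      end do
      !$acc end kernels

      ! parallel loop: the programmer asserts the loop is safe to run in
      ! parallel and takes responsibility for that assertion.
      !$acc parallel loop gang vector present(a, d)
      do i = 1, n
         d(i) = d(i) + a(i)
      end do
    end subroutine compute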


One of the key features of high level language programming is modularity, including support for procedures and separate compilation. It's hard to imagine modern programming without functions and libraries. The term compiler was originally used for the software that compiled separately created external objects into a single binary, what we now call a linker. Yet, until recently, OpenACC programs could only support procedures through inlining, more or less preventing any use of libraries or procedure calls across multiple files.

With the latest releases, PGI now supports procedure calls, separate compilation and linking for OpenACC programs targeting NVIDIA GPU accelerators. This article introduces this important feature and shows how to use the acc routine directive to enable it. I will also present hints on how to use the clauses on the routine directive, including reasons why the clauses are necessary, and some caveats and current limitations. Continue reading ⇒
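Here is a minimal, hypothetical two-file sketch of the idea (file and routine names are mine): a procedure marked with acc routine in one file, compiled separately, and called from an OpenACC parallel loop in another:

    ! saxpy_mod.f90 -- compiled separately for the accelerator
    module saxpy_mod
    contains
      subroutine saxpy_elem(y, a, x)
        !$acc routine seq        ! build a device version callable from compute regions
        real, intent(inout) :: y
        real, intent(in)    :: a, x
        y = y + a * x
      end subroutine saxpy_elem
    end module saxpy_mod

    ! main.f90 -- linked against saxpy_mod.o
    program main
      use saxpy_mod
      implicit none
      integer, parameter :: n = 1000
      real :: x(n), y(n)
      integer :: i
      x = 1.0; y = 2.0
      !$acc parallel loop copyin(x) copy(y)
      do i = 1, n
         call saxpy_elem(y(i), 2.0, x(i))
      end do
      print *, y(1)              ! expect 4.0
    end program main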


In Part 1 I introduced the OpenACC routine directive and its use to enable true procedure calls and separate compilation in OpenACC programs. This article will discuss a few more advanced issues: support for global variables and the acc declare directive, interfacing to CUDA C and CUDA Fortran device functions using acc routine declarations, and using acc routine in C++ class member functions. Continue reading ⇒
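As a small, hypothetical taste of the global-variable piece (module names and coefficients are mine, not the article's), acc declare gives a module variable a device copy that a procedure marked acc routine can then reference:

    module coeffs
      implicit none
      real :: c(0:3)
      !$acc declare create(c)        ! create a device copy of the module variable
    end module coeffs

    module funcs
    contains
      real function poly(x) result(y)
        use coeffs
        !$acc routine seq
        real, intent(in) :: x
        y = c(0) + x*(c(1) + x*(c(2) + x*c(3)))
      end function poly
    end module funcs

    program demo
      use coeffs
      use funcs
      implicit none
      real :: x(100), r(100)
      integer :: i
      c = (/ 1.0, 2.0, 3.0, 4.0 /)
      !$acc update device(c)         ! keep the device copy in sync
      call random_number(x)
      !$acc parallel loop copyin(x) copyout(r)
      do i = 1, 100
         r(i) = poly(x(i))
      end do
      print *, r(1)
    end program demo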


OpenACC is designed as a parallel programming model that can deliver high performance on a wide range of systems, including accelerated systems with GPUs, multicore CPUs, and manycore processors. Until recently, PGI has focused its OpenACC development efforts on the NVIDIA Tesla and AMD Radeon GPU targets. Performance on these two different GPUs is comparable using OpenACC, but until now performance portability of OpenACC on conventional multicore microprocessors has not been demonstrated. That changes with the latest release of the PGI Accelerator compilers. Continue reading ⇒


CUDA Fortran

NVIDIA CUDA™ is a general purpose parallel programming architecture with compilers and libraries to support programming of NVIDIA GPUs. The CUDA SDK includes an extended C compiler, here called CUDA C, allowing GPU programming from a high level language. The CUDA programming model supports four key abstractions: cooperating threads organized into thread groups, shared memory and barrier synchronization within thread groups, and coordinated independent thread groups organized into a grid.

PGI and NVIDIA defined CUDA Fortran, which is supported in the upcoming PGI 2010 release, to enable CUDA programming directly in Fortran. CUDA Fortran is a small set of extensions to Fortran that supports and is built upon CUDA. The extensions allow the following actions in a Fortran program:

  • Declaration of variables that reside in GPU device memory
  • Dynamic allocation of data in GPU device memory
  • Copying of data from host memory to GPU memory, and back
  • Invocation of GPU subroutines from the host

A CUDA programmer partitions a program into coarse grain blocks that can be executed in parallel. Each block is partitioned into fine grain threads, which can cooperate using shared memory and barrier synchronization. A properly designed CUDA program will run on any CUDA-enabled GPU, regardless of the number of available processor cores. This article will teach you the basics of CUDA Fortran programming and enable you to quickly begin writing your own CUDA Fortran programs. Continue reading ⇒
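Putting those pieces together, here is a small, self-contained sketch in the spirit of the article's examples (names and sizes are mine): it declares and allocates device data, copies it with assignment statements, and invokes a kernel from the host with the chevron syntax:

    module simple_kernels
      use cudafor
    contains
      attributes(global) subroutine inc_kernel(a, b)
        real, device :: a(*)
        real, value  :: b
        integer :: i
        i = (blockIdx%x - 1) * blockDim%x + threadIdx%x
        a(i) = a(i) + b
      end subroutine inc_kernel
    end module simple_kernels

    program incr
      use cudafor
      use simple_kernels
      implicit none
      integer, parameter :: n = 256
      real :: a(n)
      real, device, allocatable :: a_d(:)       ! variable residing in GPU device memory

      allocate(a_d(n))                          ! dynamic allocation in device memory
      a = 1.0
      a_d = a                                   ! host-to-device copy via assignment
      call inc_kernel<<<n/64, 64>>>(a_d, 3.0)   ! GPU subroutine invoked from the host
      a = a_d                                   ! device-to-host copy via assignment
      print *, 'max error: ', maxval(abs(a - 4.0))
      deallocate(a_d)
    end program incr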


In CUDA Fortran, data transfers in either direction between the host and device using Fortran assignment statements or the function cudaMemcpy() are blocking transfers; that is, control is returned to the host thread only after the data transfer is complete. The cudaMemcpyAsync() function is a non-blocking variant in which control is returned immediately to the host thread. In contrast with assignment statements or cudaMemcpy(), the asynchronous transfer version requires pinned host memory, and it contains an additional argument, a stream ID. A stream is simply a sequence of operations that are performed in order on the device. Operations in different streams can be interleaved and in some cases overlapped—a property that can be used to hide data transfers between the host and the device. Continue reading ⇒
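A minimal sketch of the pattern, assuming the simplified CUDA Fortran runtime interfaces from the cudafor module (the article builds this into a full overlap example):

    program async_copy
      use cudafor
      implicit none
      integer, parameter :: n = 4*1024*1024
      real, pinned, allocatable :: a(:)       ! pinned host memory, required for async transfers
      real, device, allocatable :: a_d(:)
      integer(kind=cuda_stream_kind) :: stream1
      integer :: istat

      allocate(a(n), a_d(n))
      a = 1.0
      istat = cudaStreamCreate(stream1)

      ! non-blocking copy: control returns to the host thread immediately
      istat = cudaMemcpyAsync(a_d, a, n, stream1)

      ! ... independent host work could overlap with the transfer here ...

      istat = cudaStreamSynchronize(stream1)  ! wait for the transfer to complete
      istat = cudaStreamDestroy(stream1)
    end program async_copy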


Fortran Programming

Polymorphism is a term used in software development to describe a variety of techniques employed by programmers to create flexible and reusable software components. The term comes from Greek and loosely translates to "many forms".

In programming languages, a polymorphic object is an entity, such as a variable or a procedure, that can hold or operate on values of differing types during the program's execution. Because a polymorphic object can operate on a variety of values and types, it can also be used in a variety of programs, sometimes with little or no change by the programmer. The idea of write once, run many, also known as code reusability, is an important characteristic of the programming paradigm known as Object-Oriented Programming (OOP).

OOP describes an approach to programming where a program is viewed as a collection of interacting, but mostly independent software components. These software components are known as objects in OOP and they are typically implemented in a programming language as an entity that encapsulates both data and procedures. Continue reading ⇒
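To make the terminology concrete, here is a small, hypothetical F2003 sketch (not from the article) in which a single type-bound procedure operates on a base type and any of its extensions through a polymorphic CLASS dummy argument:

    module shapes
      implicit none

      type :: shape
         real :: x = 0.0, y = 0.0           ! encapsulated data
      contains
         procedure :: describe              ! type-bound procedure
      end type shape

      type, extends(shape) :: circle        ! inheritance via type extension
         real :: radius = 1.0
      end type circle

    contains

      subroutine describe(this)
        class(shape), intent(in) :: this    ! polymorphic dummy argument
        print *, 'shape at ', this%x, this%y
      end subroutine describe

    end module shapes

    program demo
      use shapes
      implicit none
      type(circle) :: c
      call c%describe()                     ! the inherited binding works on the extension too
    end program demo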


This is the second part of a series of articles that explore Object-Oriented Programming (OOP) in Fortran 2003 (F2003). The first installment introduced the OOP paradigm and three important features of OOP: inheritance, polymorphism, and information hiding. F2003 supports inheritance through type extension, polymorphism through its CLASS keyword, and information hiding through its PUBLIC/PRIVATE keywords/binding-attributes.

There are two basic types of polymorphism: procedure polymorphism and data polymorphism. Part one of the series covered procedure polymorphism, which deals with procedures that can operate on a variety of data types and values. Data polymorphism, a topic for this article, deals with program variables that can store and operate on a variety of data types and values.

In addition to data polymorphism, we will also examine F2003's typed allocation, sourced allocation, unlimited polymorphic objects, generic type-bound procedures, abstract types, and deferred bindings. Continue reading ⇒
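A tiny, hypothetical example (mine, not the article's) combining an unlimited polymorphic variable, sourced allocation, and SELECT TYPE:

    program data_poly
      implicit none

      type :: point
         real :: x, y
      end type point

      class(*), allocatable :: item          ! unlimited polymorphic variable

      ! sourced allocation: item takes the dynamic type and value of the source
      allocate(item, source=point(1.0, 2.0))

      ! SELECT TYPE recovers the dynamic type before the value can be used
      select type (p => item)
      type is (point)
         print *, 'point: ', p%x, p%y
      type is (integer)
         print *, 'integer: ', p
      class default
         print *, 'some other type'
      end select
    end program data_poly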


Part three introduces an F2003 feature known as parameterized derived types. Parameterized derived types allow the programmer to create derived types that take one or more values, known as parameters, to specify characteristics of the data encapsulated by the derived type. These parameters are supplied by the user of the derived type to specify the kind and/or amount of data needed by the derived type. We will first look at features and syntax of parameterized derived types. Then we will conclude our discussion with a case study that uses parameterized derived types to create general purpose matrices. Continue reading ⇒
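As a quick, hypothetical taste of the syntax (the article's matrix case study is more elaborate), a parameterized derived type with one kind parameter and two length parameters might look like this:

    module pdt_matrix
      implicit none

      ! a matrix whose precision (kind) and shape (len) are type parameters
      type :: matrix(k, rows, cols)
         integer, kind :: k = kind(0.0)      ! kind parameter, fixed at compile time
         integer, len  :: rows, cols         ! len parameters, may be set at run time
         real(k) :: values(rows, cols)
      end type matrix

    end module pdt_matrix

    program use_pdt
      use pdt_matrix
      implicit none
      type(matrix(kind(0.0d0), rows=3, cols=3)) :: a  ! double precision, 3x3
      type(matrix(rows=:, cols=:)), allocatable :: b  ! default kind, deferred shape

      a%values = 0.0d0
      allocate(matrix(rows=10, cols=20) :: b)         ! typed allocation sets the len parameters
      print *, a%rows, a%cols, b%rows, b%cols
    end program use_pdt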


Tutorials and Hands-On

Since the introduction of PGI CUDA Fortran late last year, we've seen a dramatic rise in the number of customers using this new extension to the Fortran language. As the moderator of the PGI User Forum, I have been very busy answering questions about the language, and noting those questions that seem to be asked often or may be of interest to the wider community. For this installment of the PGInsider, I have implemented the Monte Carlo Integration algorithm to highlight some of the tips, tricks, and traps of programming for the GPU. Continue reading ⇒
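The article's code and its tips are its own; as a generic, hypothetical warm-up, here is one simple way to write a Monte Carlo estimate of pi in CUDA Fortran, generating samples on the host, staging them to the device, and counting hits in a kernel:

    module mc_kernels
      use cudafor
    contains
      attributes(global) subroutine count_hits(x, y, hit, n)
        real, device :: x(*), y(*)
        integer, device :: hit(*)
        integer, value :: n
        integer :: i
        i = (blockIdx%x - 1) * blockDim%x + threadIdx%x
        if (i <= n) then
           ! does the point (x,y) fall inside the unit quarter circle?
           if (x(i)*x(i) + y(i)*y(i) <= 1.0) then
              hit(i) = 1
           else
              hit(i) = 0
           end if
        end if
      end subroutine count_hits
    end module mc_kernels

    program mc_pi
      use cudafor
      use mc_kernels
      implicit none
      integer, parameter :: n = 1000000
      real,    allocatable         :: x(:), y(:)
      real,    device, allocatable :: x_d(:), y_d(:)
      integer, allocatable         :: hit(:)
      integer, device, allocatable :: hit_d(:)
      integer :: nthreads, nblocks

      allocate(x(n), y(n), hit(n), x_d(n), y_d(n), hit_d(n))
      call random_number(x)
      call random_number(y)
      x_d = x                        ! stage the samples to the device
      y_d = y

      nthreads = 256
      nblocks  = (n + nthreads - 1) / nthreads
      call count_hits<<<nblocks, nthreads>>>(x_d, y_d, hit_d, n)

      hit = hit_d                    ! copy the results back to the host
      print *, 'pi estimate: ', 4.0 * real(sum(hit)) / real(n)
    end program mc_pi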

