The PGI CDK® Cluster Development Kit® compilers and development tools enable use of networked clusters of AMD or Intel x64 processor-based workstations and servers to tackle the largest scientific computing applications. The PGI CDK includes pre-configured versions of MPI for Ethernet and InfiniBand to enable development, debugging and tuning of high-performance MPI or hybrid MPI/OpenMP applications written in Fortran, C or C++.
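To make the hybrid MPI/OpenMP model concrete, here is a minimal illustrative sketch in C (our own example, not part of the PGI distribution; the file name hybrid.c is arbitrary). MPI handles message passing between cluster nodes while OpenMP spreads work across the cores within each node:

    /* hybrid.c: minimal hybrid MPI/OpenMP program */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, provided;

        /* Request threaded MPI support; the level actually granted
           is returned in 'provided'. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* OpenMP shared-memory parallelism within a single cluster node */
        #pragma omp parallel
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }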
PGI compilers offer world-class performance and features including auto-parallelization for multicore processors, OpenMP directive-based parallelization, and support for PGI Unified Binary technology. The PGI Unified Binary streamlines cross-platform support by combining code optimized for multiple x64 processors into a single executable file, ensuring your applications run correctly and with optimal performance regardless of the type of x64 processor on which they are deployed. PGI's state-of-the-art compiler optimization technologies include SSE vectorization, auto-parallelization, inter-procedural analysis and optimization, memory hierarchy optimizations, function inlining (including library functions), profile feedback optimization, CPU-specific microarchitecture optimizations and more. PGI is the ideal solution for migrating compute-intensive legacy applications from RISC/UNIX servers and workstations to 64-bit clusters.
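As a rough sketch of how these features are engaged from the command line, the example below pairs a simple vectorizable loop with representative PGI compile lines. The flag spellings follow common PGI usage but should be verified against the documentation for your release:

    /* saxpy.c: a simple loop amenable to SSE vectorization and
     * auto-parallelization.
     *
     * Representative compile lines (verify against your PGI release):
     *   pgcc -fast -Minfo saxpy.c           # aggressive optimization with feedback messages
     *   pgcc -fast -Mconcur -Minfo saxpy.c  # add auto-parallelization for multicore
     *   pgcc -fast -mp saxpy.c              # enable OpenMP directive-based parallelization
     *   pgcc -fast -tp x64 saxpy.c          # build a PGI Unified Binary for AMD and Intel x64
     */
    #include <stdio.h>
    #define N 1000000

    float x[N], y[N];

    int main(void)
    {
        int i;

        for (i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        /* SAXPY: a candidate loop for vectorization and parallelization */
        for (i = 0; i < N; ++i)
            y[i] = 2.0f * x[i] + y[i];

        printf("y[0] = %f\n", y[0]);
        return 0;
    }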
About PGI Accelerator Compilers
PGI offers separate products for x64+accelerator and x64-only platforms. "PGI Accelerator" products (the x64+accelerator platform products) include support for the directive-based PGI Accelerator programming model and, within the PGI Accelerator Fortran compiler, support for CUDA Fortran. PGI Accelerator compilers are supported on all Intel and AMD x64 processor-based systems with CUDA-enabled NVIDIA GPUs running Linux, OS X or Windows.
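For illustration, the sketch below shows the shape of a directive-based accelerator region in C; the #pragma acc region syntax follows the PGI Accelerator programming model as we understand it, and should be checked against your compiler's documentation (the Fortran analogue uses !$acc comment directives, with CUDA Fortran providing lower-level explicit GPU programming):

    /* accel.c: sketch of a directive-based PGI Accelerator region */
    #define N 1000000

    int main(void)
    {
        static float a[N], r[N];
        int i;

        for (i = 0; i < N; ++i)
            a[i] = (float)i;

        /* The compiler offloads this region to a CUDA-enabled NVIDIA GPU,
           generating the data movement and kernel launch automatically. */
        #pragma acc region
        {
            for (i = 0; i < N; ++i)
                r[i] = a[i] * 2.0f;
        }

        return 0;
    }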
PGI Accelerator compilers (including CUDA Fortran) are included in all PGI 2010 and later download packages. Trial license keys or updated permanent license keys are required to enable the accelerator features. Contact PGI Sales for information on upgrading your current license to a PGI Accelerator license.
Debugging a cluster MPI application can be extremely challenging. The PGDBG® debugger provides a comprehensive set of graphical user interface (GUI) elements to assist you in this process. PGDBG provides the ability to separately debug and control OpenMP threads and MPI processes on your cluster. Step, Break, Run or Halt OpenMP threads or MPI processes individually, as a group, or in user-defined process/thread subsets. PGDBG can even display the state of MPI message queues, enabling you to quickly isolate and resolve message-passing deadlock bugs. Using a single integrated multi-process debugging window, PGDBG provides precise control and feedback on the state of every MPI process and OpenMP thread simultaneously. Its capabilities are fully integrated for debugging hybrid parallel programs that use MPI message passing between nodes and OpenMP shared-memory parallelism within each multicore processor-based cluster node.
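As an illustration of the kind of bug the message-queue display exposes, the following deliberately broken exchange (our own example, not from PGI) deadlocks because both ranks enter a blocking synchronous send before either posts its receive:

    /* deadlock.c: both ranks block in MPI_Ssend waiting for a matching
     * receive that is never reached. Run with exactly two MPI processes. */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, other;
        double buf[1024];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        other = 1 - rank;   /* partner rank; assumes a two-process job */

        /* A synchronous send cannot complete until the partner posts a
           receive, but the partner is stuck in its own MPI_Ssend: a classic
           deadlock. Reversing the order on one rank, or using MPI_Sendrecv,
           fixes it. */
        MPI_Ssend(buf, 1024, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
        MPI_Recv(buf, 1024, MPI_DOUBLE, other, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }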
The main PGDBG window displays Fortran, C or C++ program source code, optionally interleaved with the corresponding x64 assembly code. In addition to the main source code window, PGDBG provides supplementary program information in a number of tabbed panels including call stack, registers, local variables, memory, a command line, events, graphical process and thread grid, status messages, MPI messages and group information. PGDBG is interoperable with the GNU gcc/g++ compilers on Linux.
PGPROF® is a powerful and easy-to-use interactive postmortem statistical analyzer for MPI parallel and OpenMP thread-parallel programs running on clusters. Use PGPROF to visualize and diagnose the performance of the components of your program. PGPROF associates execution time with the source code and instructions of your program allowing you to see where and how execution time is spent. Through resource utilization data and compiler feedback information, PGPROF also provides features for helping you to understand why certain parts of your program have high execution times.
Use PGPROF to analyze programs on multicore SMP servers, distributed-memory clusters and hybrid clusters where each node contains multicore x64 processors. Use the PGPROF profiler to profile parallel programs, including multiprocess MPI programs, multi-threaded OpenMP programs, or a combination of both. PGPROF allows profiling at the function, source code line, and assembly instruction levels for PGI-compiled Fortran, C and C++ programs. PGPROF provides views of the performance data for analysis of MPI communication, multiprocess and multi-thread load balancing, and scalability.
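A typical profiling session might look like the sketch below. The -Mprof suboptions, the -Minfo=ccff option and the pgprof.out trace file name follow PGI conventions as we understand them, and should be verified against the documentation for your release:

    /* work.c: a toy hot spot to profile.
     *
     * Representative workflow (verify against your PGI release):
     *   pgcc -fast -Mprof=lines work.c -o work  # instrument at line level
     *   pgcc -fast -Minfo=ccff work.c -o work   # alternatively, embed CCFF feedback
     *   ./work                                  # run; writes a pgprof.out trace
     *   pgprof                                  # launch PGPROF; it picks up pgprof.out
     */
    #include <stdio.h>

    static double burn(int n)
    {
        int i;
        double s = 0.0;

        for (i = 1; i <= n; ++i)
            s += 1.0 / (double)i;   /* most samples should land on this line */

        return s;
    }

    int main(void)
    {
        printf("sum = %f\n", burn(100000000));
        return 0;
    }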
Using the Common Compiler Feedback Format (CCFF), PGI compilers save information about how your program was optimized, or why a particular optimization was not made. PGPROF can extract this information and associate it with source code and other performance data, enabling you to view all of this information simultaneously. PGPROF also supports a feedback-only mode, which allows you to browse compiler feedback associated with a CCFF-enabled binary executable in the absence of a performance profile.
Each performance profile depends on the resources of the system where it is run. PGPROF provides a summary of the processor(s) and operating system(s) used by the application during any given performance experiment.
PGPROF provides the information necessary for determining which functions and lines in an application are consuming the most execution time. Combined with the feedback features of the PGI compilers, PGPROF enables maximizing vectorization and performance on a single x64 processor core. PGPROF exposes performance bottlenecks in a cluster application by presenting the number of calls, aggregate message size and execution time of individual MPI function calls on a line-by-line basis.
Use PGPROF to merge trace files from multiple runs on different numbers of nodes to perform scalability analysis on your MPI or OpenMP application at the application, function or line level. Scalability analysis plainly displays which parts of your application are barriers to scalable performance, and where parallel tuning efforts should be focused. PGPROF displays information in easy-to-use formats such as bar-charts, percentages, counts or seconds.
On Linux, the OpenMP and MPI parallel PGDBG debugger and PGPROF performance profiler included with the PGI CDK support MPICH, MPICH2, HP-MPI and Open MPI over Ethernet, and MVAPICH over InfiniBand clusters. MPICH (including MPICH2), developed at Argonne National Laboratory, is an open-source, full implementation of the Message-Passing Interface (MPI) standard, so your existing MPI applications will port easily to your Linux cluster using the PGI CDK.
MVAPICH, the "MPI over InfiniBand, iWARP and RDMA-enabled Interconnects" project is led by Network-Based Computing Laboratory, Department of Computer Science and Engineering at the Ohio State University.
Request a 30-day trial of the PGI CDK by completing the PGI CDK Evaluation Request Form.
*About the PGI Roll—The PGI Roll is maintained and distributed by Stanford University. The PGI Roll contains software only. A valid PGI license is required to use the software. A valid PGI CDK license is required to enable remote MPI debugging and profiling.
A partial list of technical features supported includes the following:
Note: Heterogeneous systems that include both 32-bit and 64-bit processor-based workstations or servers are not supported.