PGI CDK Cluster Development Kit

Parallel Fortran, C and C++ Compilers & Tools for Programming HPC Clusters

The PGI CDK® Cluster Development Kit® compilers and development tools enable use of networked clusters of AMD or Intel x64 processor-based workstations and servers to tackle the largest scientific computing applications. The PGI CDK includes pre-configured versions of MPI for Ethernet and InfiniBand to enable development, debugging and tuning of high-performance MPI or hybrid MPI/OpenMP applications written in Fortran, C or C++.
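
For illustration, a minimal hybrid MPI/OpenMP program of the kind the CDK targets might look like the following sketch; the build command in the comment is an assumption based on common MPI wrapper usage and the PGI -mp flag, and should be adapted to your installation:

    /* hybrid.c: minimal hybrid MPI + OpenMP sketch.
       Assumed build: an MPI compiler wrapper around pgcc with -mp to
       enable OpenMP, e.g. "mpicc -mp hybrid.c -o hybrid" (exact
       invocation depends on your MPI installation). */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        MPI_Init(&argc, &argv);                 /* MPI across cluster nodes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        #pragma omp parallel                    /* OpenMP within each node */
        printf("rank %d of %d, thread %d of %d\n",
               rank, nprocs, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }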

Parallel Fortran, C and C++ Compilers

PGI compilers offer world-class performance and features including auto-parallelization for multicore processors, OpenMP directive-based parallelization, and support for PGI Unified Binary™ technology. The PGI Unified Binary streamlines cross-platform support by combining code optimized for multiple x64 processors into a single executable file, ensuring your applications run correctly and with optimal performance regardless of the type of x64 processor on which they are deployed. PGI's state-of-the-art compiler optimization technologies include SSE vectorization, auto-parallelization, interprocedural analysis and optimization, memory hierarchy optimizations, function inlining (including library functions), profile feedback optimization, CPU-specific microarchitecture optimizations and more. PGI is the ideal solution for migrating compute-intensive legacy applications from RISC/UNIX servers and workstations to 64-bit clusters.

About PGI Accelerator Compilers

PGI offers separate products for x64+accelerator and x64-only platforms. "PGI Accelerator" products, the x64+accelerator platform products, include support for the directive-based PGI Accelerator programming model and, within the PGI Accelerator Fortran compiler, support for CUDA Fortran. PGI Accelerator compilers are supported on all Intel and AMD x64 processor-based systems with CUDA-enabled NVIDIA GPUs running Linux, OS X or Windows.
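
As an illustrative sketch (not taken from PGI documentation), a directive-based accelerator kernel in C might look like the following; the #pragma acc region spelling and the -ta=nvidia flag should be verified against the PGI Accelerator documentation for your release:

    /* saxpy.c: PGI Accelerator model sketch; the compiler generates the
       GPU kernel from the directive. Assumed build: "pgcc -ta=nvidia
       saxpy.c" (flag per PGI docs; verify for your release). */
    void saxpy(int n, float a, float *restrict x, float *restrict y)
    {
        #pragma acc region
        {
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }
    }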

PGI Accelerator compilers (including CUDA Fortran) are contained in all PGI 2010 or later download packages. Trial license keys or updated permanent license keys are required to enable the accelerator features. Contact PGI Sales for information on upgrading your current license to a PGI Accelerator license.

The PGDBG OpenMP/MPI Debugger

Debugging a cluster MPI application can be extremely challenging. The PGDBG® debugger provides a comprehensive set of graphical user interface (GUI) elements to assist you in this process. PGDBG provides the ability to separately debug and control OpenMP threads and MPI processes on your cluster. Step, Break, Run or Halt OpenMP threads or MPI processes individually, as a group, or in user-defined process/thread subsets. PGDBG can even display the state of MPI message queues, enabling you to quickly isolate and resolve message-passing deadlock bugs. Using a single integrated multi-process debugging window, PGDBG provides precise control and feedback on the state of every MPI process and OpenMP thread simultaneously, with fully integrated capabilities for debugging hybrid parallel programs that use MPI message-passing between nodes and OpenMP shared-memory parallelism within a multicore processor-based cluster node.

The main PGDBG window displays Fortran, C or C++ program source code, optionally interleaved with the corresponding x64 assembly code. In addition to the main source code window, PGDBG provides supplementary program information in a number of tabbed panels including call stack, registers, local variables, memory, a command line, events, graphical process and thread grid, status messages, MPI messages and group information. PGDBG is interoperable with the GNU gcc/g++ compilers on Linux.

The PGPROF OpenMP/MPI Profiler

PGPROF® is a powerful and easy-to-use interactive postmortem statistical analyzer for MPI parallel and OpenMP thread-parallel programs running on clusters. Use PGPROF to visualize and diagnose the performance of the components of your program. PGPROF associates execution time with the source code and instructions of your program, allowing you to see where and how execution time is spent. Through resource utilization data and compiler feedback information, PGPROF also helps you understand why certain parts of your program have high execution times.

Use PGPROF to analyze programs on multicore SMP Servers, distributed-memory clusters and hybrid clusters where each node contains multicore x64 processors. Use the PGPROF profiler to profile parallel programs, including multiprocess MPI programs, multi-threaded OpenMP programs, or a combination of both. PGPROF allows profiling at the function, source code line, and assembly instruction level for PGI-compiled Fortran, C and C++ programs. PGPROF provides views of the performance data for analysis of MPI communication, multiprocess and multi-thread load balancing, and scalability.

Using the Common Compiler Feedback Format (CCFF), PGI compilers save information about how your program was optimized, or why a particular optimization was not made. PGPROF can extract this information and associate it with source code and other performance data, enabling you to view all of this information simultaneously. PGPROF also supports a feedback-only mode, which allows you to browse compiler feedback associated with a CCFF-enabled binary executable in the absence of a performance profile.
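
For example, compiling a simple loop with feedback enabled records whether it was vectorized and, if not, why; the flags shown in the comment (-fast -Minfo=ccff) are assumptions to check against the PGI documentation:

    /* dot.c: a candidate loop for SSE vectorization. With feedback
       enabled (assumed flags: "pgcc -fast -Minfo=ccff dot.c"), the
       compiler records whether this loop was vectorized and, if not,
       which dependence prevented it; PGPROF shows that record alongside
       the profile data for this line. */
    double dot(int n, const double *x, const double *y)
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
            sum += x[i] * y[i];    /* sum reduction over the two arrays */
        return sum;
    }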

Each performance profile depends on the resources of the system where it is run. PGPROF provides a summary of the processor(s) and operating system(s) used by the application during any given performance experiment.

PGPROF provides the information necessary for determining which functions and lines in an application are consuming the most execution time. Combined with the feedback features of the PGI compilers, PGPROF enables maximizing vectorization and performance on a single x64 processor core. PGPROF exposes performance bottlenecks in a cluster application by presenting the number of calls, aggregate message size and execution time of individual MPI function calls on a line by line basis.
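
As a concrete illustration (a generic ping-pong pattern, not taken from PGI materials), PGPROF can attribute call counts, aggregate bytes and time to each MPI_Send and MPI_Recv line in an exchange like this:

    /* pingpong.c: each MPI_Send/MPI_Recv line below appears separately
       in the PGPROF MPI view with its own call count, aggregate message
       size and execution time. */
    #include <mpi.h>

    void pingpong(int rank, int n, double *buf)
    {
        MPI_Status st;
        if (rank == 0) {
            MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_DOUBLE, 1, 1, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, n, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
        }
    }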

Use PGPROF to merge trace files from multiple runs on different numbers of nodes to perform scalability analysis on your MPI or OpenMP application at the application, function or line level. Scalability analysis plainly displays which parts of your application are barriers to scalable performance, and where parallel tuning efforts should be focused. PGPROF displays information in easy-to-use formats such as bar-charts, percentages, counts or seconds.

PGI CDK Cluster Development Kit Key Features

  • Floating multi-user seats for the PGI parallel PGFORTRAN™, PGCC® and PGC++® compilers
  • World-class single-core and multicore processor performance
  • Full native support for OpenMP directive- and pragma-based SMP or multicore parallelization in PGFORTRAN, PGCC and PGC++
  • Auto-parallelization for the latest AMD and Intel multicore processors
  • Graphical parallel PGDBG debugger and PGPROF performance profiler for auto-parallel, thread-parallel, OpenMP and MPI programs
  • Pre-configured MPI message-passing libraries and utilities for Linux
  • Optimized BLAS and LAPACK math libraries for Linux
  • Comprehensive support for all major Linux distributions
  • Installation utilities to simplify the setup and management of your Linux cluster
  • PGI Roll option*

MPI Support

On Linux, the OpenMP and MPI parallel PGDBG debugger and PGPROF performance profiler included with the PGI CDK support MPICH, MPICH2, HP-MPI and Open MPI over Ethernet, and MVAPICH over InfiniBand. MPICH (including MPICH2), developed at Argonne National Laboratory, is an open source implementation of the Message-Passing Interface (MPI) standard. MPICH is a full implementation of MPI, so your existing MPI applications will port easily to your Linux cluster using the PGI CDK.

MVAPICH, the "MPI over InfiniBand, iWARP and RDMA-enabled Interconnects" project, is led by the Network-Based Computing Laboratory of the Department of Computer Science and Engineering at the Ohio State University.

Request a 30-day trial of the PGI CDK by completing the PGI CDK Evaluation Request Form.

*About the PGI Roll—The PGI Roll is maintained and distributed by Stanford University. The PGI Roll contains software only. A valid PGI license is required to use the software. A valid PGI CDK license is required to enable remote MPI debugging and profiling.

Technical Features

A partial list of technical features supported includes the following:

  • PGFORTRAN™ native OpenMP and auto-parallel Fortran 95/03 compiler with CUDA extensions
  • PGF77® native OpenMP and auto-parallel FORTRAN 77 compiler
  • PGHPF® native data parallel compiler with full HPF language support (Linux only)
  • PGCC® OpenMP and auto-parallel ANSI and K&R C compiler
  • PGC++® OpenMP and auto-parallel C++ compiler
  • PGDBG® graphical Cluster MPI and OpenMP debugger
  • PGPROF® graphical cluster MPI and OpenMP performance profiler
  • Full support for the PGI Accelerator™ programming model on x64+GPU (PGFORTRAN and PGCC only)
  • Full 64-bit support on multicore AMD64 and Intel 64
  • Intel 64 and AMD Opteron optimizations including SSE4.2/AVX, SSE4a/ABM, prefetching, use of extended register sets, and 64-bit addressing
  • PGI Unified Binary™ technology combines code optimized for multiple AMD64 processors, Intel 64 processors or NVIDIA GPUs into a single executable or object file
  • Includes separate 64-bit x64 and 32-bit x86 development environments and compilers
  • Full support for Fortran 2003
  • Full support for ANSI C99
  • Full support for OpenMP 3.0 on up to 256 cores (see the tasking sketch after this list)
  • Support for 64-bit integers and 64-bit reals (-i8 and -r8 compilation flags)
  • Highly tuned Intel MMX and SSE intrinsics library routines (C/C++ only)
  • One pass interprocedural analysis (IPA)
  • Interprocedural optimization of libraries
  • Profile feedback optimization
  • Function inlining including library functions
  • Vectorization, loop interchange, loop splitting
  • Memory hierarchy and memory allocation optimizations including huge pages support
  • Loop unrolling, loop fusion, and cache tiling
  • Enhanced auto-parallelization of loops specifically optimized for multi-core processors
  • Concurrent subroutine call support
  • Extensive vectorization/optimization directives/pragmas support
  • State-of-the-art dependence analysis and global optimization
  • Invariant conditional removal
  • Tuning for non-uniform memory access (NUMA) architectures
  • Process/CPU affinity support in SMP/OpenMP applications
  • Tracking ANSI C++ Standard—EDG 4.1 C++ front-end
  • C++ Class member templates
  • C++ partial specialization and ordering
  • C++ explicit template qualification
  • C/C++ extended asm support
  • GNU style template instantiation
  • GNU linkonce support
  • Integrated cpp pre-processing
  • Cray/DEC/IBM extensions (including Cray POINTERs & DEC STRUCTURES/UNIONS)
  • Support for SGI-compatible DOACROSS in PGF77 and PGFORTRAN
  • Threads-based auto-parallelization using Fortran
  • Threads-based auto-parallelization of FOR loops in C/C++
  • Full native OpenMP parallelization directives in Fortran
  • Full native OpenMP parallelization pragmas in C/C++
  • Byte swapping I/O for RISC/UNIX interoperability
  • Full support for Common Compiler Feedback Format compiler optimization listings
  • User modules support simplifies switching between multiple compiler environments/versions
  • Includes optimized ACML (LAPACK/BLAS/FFT) math library supported on all targets
  • Supports multi-threaded execution with Intel Math Kernel Libraries (MKL) 10.1 and later
  • Optional PGI compiled IMSL Fortran numerical library available
  • Pre-validated de facto standard support libraries including NetCDF, F95 OpenGL, ATLAS, ScaLAPACK, FFTW, MPICH, MPICH2 and LAM MPI
  • Interoperable with TotalView* (Linux only) and Allinea DDT
  • Fully interoperable with gcc, g77, and gdb
  • Unconditional 30-day money-back guarantee
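
As referenced in the OpenMP 3.0 item above, the headline feature of that release is tasking. A minimal illustrative sketch (not from PGI materials) follows:

    /* tasks.c: OpenMP 3.0 tasking sketch; the two recursive calls become
       independent tasks that any idle thread may execute. */
    #include <omp.h>

    long fib(int n)
    {
        long a, b;
        if (n < 2) return n;
        #pragma omp task shared(a)      /* child task for fib(n-1) */
        a = fib(n - 1);
        #pragma omp task shared(b)      /* child task for fib(n-2) */
        b = fib(n - 2);
        #pragma omp taskwait            /* wait for both children */
        return a + b;
    }

    /* typical driver: start the parallel region once, then create the
       root task from a single thread */
    long fib_par(int n)
    {
        long r;
        #pragma omp parallel
        #pragma omp single
        r = fib(n);
        return r;
    }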

System Requirements

  • Front-end Node: 64-bit x64 or 32-bit x86 processor-based workstation or server with one or more AMD or Intel microprocessors.
  • Cluster Nodes: 64-bit x64 or 32-bit x86 processor-based workstation or server with one or more AMD or Intel microprocessors.
    Accelerator (optional): NVIDIA CUDA-enabled GPU with compute capability 1.0 or later.
    Note: Heterogeneous systems that include both 32-bit and 64-bit processor-based workstations or servers are not supported.
  • Network: Standard TCP/IP network such as Ethernet, Fast Ethernet or Gigabit Ethernet; high-performance InfiniBand network. Preferred configuration is a dedicated private network interconnecting the cluster nodes, with the designated front-end node also networked to a general purpose network.
  • Operating System: Linux. On 32-bit x86 processor-based systems, the software must be co-installed with a version of the Linux operating system with kernel revision 2.4.18 or higher; on 64-bit processor-based systems, it must be co-installed with a 64-bit Linux distribution with kernel revision 2.4.19 or higher.
  • Memory: Minimum 1 GB per cluster node. 2 GB recommended for front-end node.
  • Hard Disk: 800 MB on front-end node; 50 MB on each cluster node.
  • Other: Web browser and Adobe Acrobat Reader for viewing online documentation.