PGI Guide to Molpro

This guide is intended to help PGI customers install, build, and run a complete system of ab initio programs for molecular electronic structure calculations (MOLPRO) from source files using PGI 7.1 compilers on a 64-bit Linux system.

Version Information
  This guide was created for MOLPRO parallel version 2006.1 when building from the source files using PGI Release 7.1 pgf90 on an AMD™ AMD64/Opteron or Intel® Xeon with EM64T system running 64-bit Linux. This guide does not cover the non-relocatable pre-built binary version of MOLPRO.
Application Notes
  Information about MOLPRO can be found at the MOLPRO home page.
From the MOLPRO Home page:

"MOLPRO is a complete system of ab initio programs for molecular electronic structure calculations, designed and maintained by H.-J. Werner and P. J. Knowles, and containing contributions from a number of other authors. As distinct from other commonly used quantum chemistry packages, the emphasis is on highly accurate computations, with extensive treatment of the electron correlation problem through the multiconfiguration-reference CI, coupled cluster and associated methods. Using recently developed integral-direct local electron correlation methods, which significantly reduce the increase of the computational cost with molecular size, accurate ab initio calculations can be performed for much larger molecules than with most other programs."

Obtaining the Source Code and License
  MOLPRO source code, binaries and documentation are copyrighted materials and may only be distributed under license. A valid license key from MOLPRO is required before you may use the MOLPRO program. To obtain the MOLPRO distribution materials, please visit the MOLPRO home page or contact molpro@molpro.net.
Prerequisites
 
  • A Fortran 90 compiler and the GNU C compiler.
  • Approximately 200 MB of free disk space for use during compilation.
  • At least one large scratch file system for the program I/O.
  • GNU make
  • Global Arrays Toolkit
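  A quick way to confirm the compiler, GNU make, and disk-space prerequisites from the shell is sketched below; the commands assume a bash-like shell and are only illustrative:
    which pgf90 gcc gmake    # all three should resolve to valid paths
    df -h .                  # confirm roughly 200 MB of free space in the build area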
Configuration and Set-up Information
 
  1. Ensure that pgf90 is in your shell's PATH.
  2. Unzip and untar the Global Arrays Toolkit. Assuming MPI is installed in the /usr/pgi/linux86-64/7.1 directory, run the following command to build the Global Arrays libraries for MPI use:
    gmake TARGET=LINUX64 MPI_INCLUDE=/usr/pgi/linux86-64/7.1/include \
    MPI_LIB=/usr/pgi/linux86-64/7.1/lib LIBMPI=-lmpich FC=pgf90 \
    CC=pgcc COPT=-fast FOPT=-fast USE_MPI=yes
    You may need a different configuration for tcgmsg or Myrinet.
  3. Unzip and untar the MOLPRO package in your build directory.
  4. cd into the new molpro2006.1 directory.
  5. Run the following command to configure:
    ./configure -mpp -pgf90 -i8
    This will create the CONFIG file. Note that you may need to enter information during this configuration, such as whether large files are supported, the location of the BLAS library, where the MOLPRO executable and its library will be installed, and your MOLPRO license key. An example configuration session follows:
    mymachine% ./configure -mpp -pgf90 -i8
    
    Building program for processor type=x86_64
    Program may not run on other platforms
    You can choose another processor type using the [p3|p4|athlon|x86_64|ia64] option
    
    Program will be built for parallel execution
    found /usr/local/bin/perl version 5.008001
    Machine architecture is unix unix-i8 unix-linux unix-linux-x86_64
    Additional compilation pre-processor flags: mpp eaf
    
    File size greater than 2GB allowed
    
    pgf90 7.1-2 64-bit target on x86-64 Linux -tp k8-64
    searching for blas libraries...
    found blaslib   -L/usr/pgi_rel/linux86-64/7.1-2/lib -lacml_mv -lacml
    
    blaslibold=
    plese select blas library from above list
    (default -L/usr/pgi_rel/linux86-64/7.1-2/lib -lacml_mv -lacml):
    
    acml library contains full lapack implementation
    
    Do wish to use tcgmsg, mpi, or myrinet? [tcgmsg]
    mpi
    Please give the name of the directory containing GA, MA libraries:
    (default ): 
    /local/home/ga-4-0-8/lib/LINUX64
    If you want to use a special wrapper for parallel job startup
    please specify. (default /usr/pgi_rel/linux86-64/7.1/mpi/mpich/bin/mpirun): 
    
    Please give both the -L and -l loader options needed to access the MPI library
    Leave blank if you want to use the vendor supplied library on Cray
    or IBM SP, etc
    (default -L/usr/lib -lmpi):
    -L/usr/pgi/linux86-64/7.1/mpi/mpich/lib -lmpich
    MPI implementation is mpich
    Enter max number of atoms [ 200]: 
    Enter max number of valence orbitals [ 300]: 
    Enter max number of basis functions [2000]: 
    Enter max number of states per symmetry [  20]: 
    Enter max number of state symmetries [  16]: 
    
    If you wish to use a system BLAS library, please give the maximum
    level (0, 1, 2, 3, or 4) of BLAS to be supplied by the system
    If the system blas only supports 32 bit integers, only 0 or 4 can be used
    4 is recommended for IBM, SGI-IRIX, SUN, FUJITSU
    (default 3): 
    
    Installed binaries directory [/usr/local/bin]
    
    Installed auxiliary directory [/usr/local/lib/molpro-mpp-Linux-x86_64-i8-2006.1]
    
    Installed HTML documentation directory [/home/public_html/molpro/molpro2006.1]
    
    Installed CGI script directory [/home/public_html/molpro/molpro2006.1]
    
    Found glob.h
    Using getopt_long from system library
    parse-x86_64-unknown-linux-gnu-i8.o.gz is your object.
    
    CONFIG file created; proceed to compilation
    
  6. Check the CONFIG file to make sure the information is as expected. You may edit the CONFIG file directly to fix any errors. Note that -i8 should be present in the 64-bit compiler options. We recommend using "-fast" for best performance. We've included an example CONFIG file for use with 64-bit pgf90.
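    To confirm that the 64-bit compiler options in CONFIG include -i8 and the recommended -fast, a quick check from the molpro2006.1 directory might look like the following (a sketch only; grep simply prints the matching lines):
    grep -n -- "-i8" CONFIG
    grep -n -- "-fast" CONFIG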
Building MOLPRO
  Once the configuration is set and the CONFIG file has been created, type the following in the MOLPRO base directory:

make
The -j option can be used to speed up compilation on systems with multiple CPUs.
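For example, on a machine with four CPUs a parallel build might look like the following (the CPU count is an assumption; adjust it for your system):

make -j 4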
Running MOLPRO
  Tuning MOLPRO

MOLPRO can be tuned for a particular system by running the following command from the base MOLPRO directory. The tuning determines which version of each routine, the BLAS version or the MOLPRO version, gives the best performance and appends the tuning results to "bin/molpro*mpi.rc".

./bin/molpro -n 2 mpptune.com
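To see what the tuning run appended, you can view the end of the resource file; this is only a convenience check, not a required step:

tail bin/molpro*mpi.rc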
Running the test cases

The examples below run with 2 parallel processes (-n 2).

Run a small subset of tests:

make MOLPRO_OPTIONS="-n 2" -C testjobs quicktest

Run most tests except some long ones:

make MOLPRO_OPTIONS="-n 2" -C testjobs test

Run all tests:

make MOLPRO_OPTIONS="-n 2" -C testjobs bigtest

These commands can also be issued in the MOLPRO base directory.

Running the benchmark

In directory bench, type:

./configure
You only need to do this once.

To run a benchmark, for example small3.com with 2-way parallelism, type:

runbench 2 1 small3.com
Each job appends the timings to bench.tab. Timings are also listed in a table at the end of each output.
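To review the accumulated timings after one or more benchmark runs, simply display the table from the bench directory (a convenience step only):

cat bench.tab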
Verifying Correctness
  MOLPRO is a "self-check" program since a program will only run to completion if no errors occur. For test cases, if the test runs without errors, an *.out file will be created in directory testjobs . If an error does occur, an *.errout file will be created. For benchmarks, if a program runs without errors, timings should be listed in each output as well as appended at the end of bench.tab file.
Known Issues and Limitations
  No known issues.