HiPERiSM's Course HC5 

HiPERiSM - High Performance Algorism Consulting

Course HC5: Porting Vector Code to OpenMP




This course is intended for experienced Fortran and ANSI C programmers who have developed production code on vector mainframes and need to move it to an SMP parallel implementation. Experience programming vector supercomputers is an advantage, but no prior knowledge of programming (non-vector) parallel computers is assumed.


The primary aim of this training course is to introduce OpenMP to programmers with no prior experience in (non-vector) parallel computing. A secondary aim is to serve those with a background in vector processing systems who need to port vector code to the OpenMP paradigm and learn how to use it. This approach is intended to complement other introductions (1-4) to the OpenMP Language Standard (5) for applications developers. The course teaches participants how to write parallel Fortran code using the OpenMP programming paradigm and how to convert vector constructs into equivalent OpenMP form. Target implementations range from Shared Memory Parallel (SMP) workstations to large SMP high performance computers. Special attention is devoted to issues that arise when porting legacy code to SMP OpenMP implementations.


3 days organized as follows:

(HC2 and HC5 refer to training work books for the respective courses):

Topics:

- Porting legacy code to parallel computers
- Vector and SMP Processors
- Programming vector computers
- Parallel programming concepts
- The OpenMP paradigm of parallel programming
- OpenMP language specification
- Examples and Exercises
- Code preprocessors and OpenMP
- Quick start with Intel Thread Checker for OpenMP
- Small case studies
- Using OpenMP
- The Future of OpenMP
- Case Study of the Princeton Ocean Model and open tutorial on user code


The course is contained in two course workbooks intended for use in one of three ways:

  1. Classroom presentation,
  2. Self-paced study,
  3. As a reference.

For options (1) and (2) the course workbooks are accompanied by a syllabus.

Some fundamental design principles in developing the course material and workbooks are:

  1. Orderly build-up of knowledge of parallel language paradigms and hardware before entering into the details of OpenMP.
  2. Separation of the description of how to use the OpenMP language from explanation of parallel work scheduling, data dependencies, recurrences, and memory models.
  3. Separation of the discussion of OpenMP directives and clauses from the itemization of those directives and clauses in simple, comprehensible formats.
  4. Providing examples and case studies that can be immediately compiled and executed on an OpenMP host system, and also compared to MPI equivalents.

The workbooks include all source code, sample input, output, and make files needed to compile and execute all programs discussed in the text.

 Review of Sections:

This training workbook is arranged into nine chapters described as follows.

  1. Porting Legacy Code to Parallel Computers. This chapter reviews developer perceptions of parallel programming, considerations for legacy codes, and how to look for parallelism in them. Also covered are guidelines for porting to SMP computers, typical parallel performance problems, and some lessons learned in SMP parallelization.
  2. Vector and SMP Processors. This chapter reviews some basics by describing vector architectures.
  3. Programming Vector Computers. This chapter reviews some elements of vector code performance for workloads.
  4. Code Preprocessors and OpenMP. This chapter gives an overview of preprocessors, autoparallelizers, and OpenMP design features.
  5. Quick Start With Intel Thread Checker for OpenMP. This chapter provides a simple and short introduction to using the components of the Intel Thread Checker™.
  6. Examples and Exercises. This chapter discusses several examples and compares vector and OpenMP versions. These include the Monte Carlo method for multi-dimensional integrals.
  7. Small Case Studies. This chapter discusses several case studies including studies of banded matrix solvers, finite difference methods for the two dimensional diffusion equation, and the Stommel Ocean model.
  8. Case Study of the Princeton Ocean Model. This chapter is a step-by-step tutorial comparing the Cray SV1 vector version of the POM and its conversion to an OpenMP equivalent.
  9. Bibliography. This includes a list of citations on High Performance Computing and parallel language programming.

The workbook for course HC2 is also required as a reference for this course; a summary of its contents is given in the HC2 course description.


1) L. Dagum and R. Menon, OpenMP: An Industry Standard API for Shared-Memory Programming, IEEE Computational Science and Engineering, January-March, 1998, pp 46-55.

2) G. Delic, R. Kuhn, W. Magro, H. Scott, and R. Eigenmann, Minisymposium on OpenMP - A New Portable Paradigm of Parallel Computing: Features, Performance, and Applications, Fifth SIAM conference on Mathematical and Computational Issues in the Geosciences, San Antonio, TX, March 24-27, 1999. (http://www.hiperism.com).

3) C. Koelbel, Short Course on OpenMP in Practice, Fifth SIAM conference on Mathematical and Computational Issues in the Geosciences, San Antonio, TX, March 24-27, 1999.

4) T. Mattson and R. Eigenmann, Tutorial on Programming with OpenMP, SuperComputing SC99, Portland, OR, 15 November, 1999.

5) OpenMP Fortran Application Program Interface, Version 1.1 (November, 1999), http://www.openmp.org.

6) S. Brawer, Introduction to Parallel Programming, Academic Press, Inc., Boston, MA, 1989.

7) W. Gropp, E. Lusk, A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, The MIT Press, Cambridge, MA, 1996.

8) P. S. Pacheco, Parallel Programming with MPI, Morgan Kaufmann Publishers, Inc., San Francisco, CA, 1997.


HiPERiSM Consulting, LLC, (919) 484-9803 (Voice)

(919) 806-2813 (Facsimile)