OpenMP vs. MPI on a shared memory multiprocessor

PARALLEL COMPUTING: SOFTWARE TECHNOLOGY, ALGORITHMS, ARCHITECTURES AND APPLICATIONS (2003)

Abstract
Porting a parallel application from a distributed memory system to a shared memory multiprocessor can be done by reusing the existing MPI code or by parallelising the serial version with OpenMP directives. These two routes are compared for the case of the climate model ECHAM5 on an IBM pSeries690 system. It is shown that in both cases modifications of the computation and communication patterns are needed for high parallelisation efficiency. The best OpenMP version is superior to the best MPI version and has the further advantage of allowing very efficient load balancing.
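
To illustrate the two porting routes the abstract contrasts, the following is a minimal sketch in C (ECHAM5 itself is written in Fortran; the loop, array, and names here are hypothetical and not taken from the paper). The same grid-point loop is parallelised with a single OpenMP directive, while the comments indicate how an MPI port would instead decompose the index range across ranks.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double field[N];

        /* OpenMP route: the serial loop is kept unchanged and one
           directive distributes its iterations over the threads of
           the shared memory node. An MPI port would instead assign
           each rank a contiguous slice of 'field' and exchange any
           boundary (halo) values with explicit messages. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < N; i++) {
            field[i] = (double)i * 0.5;  /* stand-in for grid-point physics */
        }

        printf("field[%d] = %f\n", N - 1, field[N - 1]);
        return 0;
    }

Compiled with OpenMP support (e.g. gcc -fopenmp) the loop runs multithreaded; without it the pragma is ignored and the code runs serially, which mirrors the incremental nature of directive-based parallelisation.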