
Integrating Asynchronous Task Parallelism with MPI

Chatterjee, S., Taşırlar, S., Budimlić, Z., Cavé, V.

Parallel & Distributed Processing (IPDPS), 2013

Abstract
Effective combination of inter-node and intra-node parallelism is recognized to be a major challenge for future extreme-scale systems. Many researchers have demonstrated the potential benefits of combining both levels of parallelism, including increased communication-computation overlap, improved memory utilization, and effective use of accelerators. However, current “hybrid programming” approaches often require significant rewrites of application code and assume a high level of programmer expertise. Dynamic task parallelism has been widely regarded as a programming model that combines the best of performance and programmability for shared-memory programs. For distributed-memory programs, most users rely on efficient implementations of MPI. In this paper, we propose HCMPI (Habanero-C MPI), an integration of the Habanero-C dynamic task-parallel programming model with the widely used MPI message-passing interface. All MPI calls are treated as asynchronous tasks in this model, thereby enabling unified handling of messages and tasking constructs. For programmers unfamiliar with MPI, we introduce distributed data-driven futures (DDDFs), a new data-flow programming model that seamlessly integrates intra-node and inter-node data-flow parallelism without requiring any knowledge of MPI. Our novel runtime design for HCMPI and DDDFs uses a combination of dedicated communication-specific and computation-specific worker threads. We evaluate our approach on a set of micro-benchmarks as well as larger applications and demonstrate better scalability compared to the most efficient MPI implementations, while offering a unified programming model to integrate asynchronous task parallelism with distributed-memory parallelism.
Key words
data flow computing, distributed shared memory systems, message passing, parallel programming, DDDF, HCMPI, Habanero-C MPI, Habanero-C dynamic task-parallel programming model, MPI calls, MPI message-passing interface, asynchronous task parallelism, communication-specific worker threads, computation-specific worker threads, tasking constructs, data-flow programming model, distributed data-driven futures, distributed-memory parallelism, distributed-memory programs, dynamic task parallelism, extreme-scale systems, inter-node data-flow parallelism, intra-node data-flow parallelism, message handling, runtime design, shared-memory programs, unified programming model, MPI, data flow, data-driven tasks, phasers