Memory Abstraction and Optimization for Distributed Executors

2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)(2020)

Abstract
This paper presents a suite of memory abstraction and optimization techniques for distributed executors, with a focus on the performance optimization opportunities for Spark executors, which are known to outperform Hadoop MapReduce executors by leveraging Resilient Distributed Datasets (RDDs), a core abstraction of Spark. This paper makes three original contributions. First, we show that Spark applications experience large performance deterioration when an RDD is too large to fit in memory, causing unbalanced memory utilization and premature spilling. Second, we develop a suite of techniques to guide the configuration of RDDs in Spark executors, aiming to optimize the performance of iterative ML workloads on Spark executors when the allocated memory is sufficient for RDD caching. Third, we design DAHI, a lightweight RDD optimizer. DAHI provides three enhancements to Spark: (i) using elastic executors instead of fixed-size JVM executors; (ii) supporting coarser-grained tasks and large RDDs by enabling partial RDD caching; and (iii) automatically leveraging remote memory for secondary RDD caching when primary RDD caching on a local node is insufficient. Extensive experiments on machine learning and graph processing benchmarks show that with DAHI, the performance of ML workloads and applications on Spark improves by up to 12.4x.
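The memory pressure the abstract describes is governed in stock Spark by a handful of fixed executor settings; a minimal spark-defaults.conf sketch of the baseline the paper argues against (the values are illustrative defaults, not figures from the paper):

```properties
# Fixed-size JVM heap per executor -- the static allocation that
# DAHI's elastic executors are designed to replace
spark.executor.memory        8g

# Fraction of heap shared by execution and storage
# under Spark's unified memory manager
spark.memory.fraction        0.6

# Portion of that fraction protected for cached RDD blocks;
# execution memory can evict cached blocks beyond this threshold
spark.memory.storageFraction 0.5
```

With these static settings, an RDD larger than the storage region is spilled or evicted wholesale; applications typically mitigate this by persisting with `StorageLevel.MEMORY_AND_DISK`, whereas DAHI instead enables partial caching and remote secondary caching.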
Keywords
MapReduce, Distributed Systems, Memory Management, Apache Spark