Modeling and Optimizing the Scaling Performance in Distributed Deep Learning Training

International World Wide Web Conference (2022)

Abstract
Distributed Deep Learning (DDL) is widely used to accelerate deep neural network training for various Web applications. In each iteration of DDL training, each worker synchronizes neural network gradients with other workers. This introduces communication overhead and degrades the scaling performance. In this paper, we propose a recursive model, OSF (Scaling Factor considering Overlap), for estimating the scaling performance of DDL training of neural network models, given the settings of the DDL system. OSF captures two main characteristics of DDL training: the overlap between computation and communication, and the tensor fusion for batching updates. Measurements on a real-world DDL system show that OSF obtains a low estimation error (ranging from 0.5% to 8.4% for different models). Using OSF, we identify the factors that degrade the scaling performance, and propose solutions to effectively mitigate their impacts. Specifically, the proposed adaptive tensor fusion improves the scaling performance by 32.2%∼150% compared to the constant tensor fusion buffer size.
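To make the two effects named in the abstract concrete, the sketch below is a rough, hypothetical illustration (not the paper's recursive OSF formulation) of how computation/communication overlap and a tensor fusion buffer interact in one synchronous data-parallel iteration. It assumes gradients become ready layer by layer during the backward pass and that fused buffers are all-reduced over a single communication channel; all function names, parameters, and numbers are assumptions made for this example.

def estimate_iteration_ms(backward_ms, grad_mb, fusion_mb, bw_mb_per_ms, forward_ms):
    """Estimate one training iteration's wall-clock time (ms).

    backward_ms[i] -- backward compute time of layer i (gradients are produced
                      in reverse layer order during the backward pass)
    grad_mb[i]     -- gradient size of layer i in MB
    fusion_mb      -- tensor fusion buffer size in MB
    bw_mb_per_ms   -- effective all-reduce bandwidth in MB per ms
    forward_ms     -- total forward compute time (no communication overlaps it here)
    """
    t_compute = 0.0   # backward compute timeline
    t_comm_end = 0.0  # when the communication channel becomes free
    buffered = 0.0    # gradient megabytes waiting in the fusion buffer

    def flush(t_compute, t_comm_end, buffered):
        # A fused all-reduce starts only when its gradients are ready AND the
        # previous all-reduce has finished (single communication channel).
        start = max(t_compute, t_comm_end)
        return start + buffered / bw_mb_per_ms

    for layer_ms, layer_mb in zip(reversed(backward_ms), reversed(grad_mb)):
        t_compute += layer_ms
        buffered += layer_mb
        if buffered >= fusion_mb:
            t_comm_end = flush(t_compute, t_comm_end, buffered)
            buffered = 0.0

    if buffered > 0:
        # Flush whatever remains after the backward pass ends.
        t_comm_end = flush(t_compute, t_comm_end, buffered)

    # The iteration ends when both the backward pass and the last (non-hidden)
    # all-reduce have finished; the forward pass is treated as purely sequential.
    return forward_ms + max(t_compute, t_comm_end)

# Hypothetical 4-layer model. The early all-reduces hide behind backward
# computation, but the final one cannot, so the iteration takes 40 ms rather
# than the 30 ms of compute alone -- the kind of degradation OSF is meant to model.
print(estimate_iteration_ms(
    backward_ms=[4.0, 6.0, 6.0, 4.0],
    grad_mb=[10.0, 40.0, 40.0, 10.0],
    fusion_mb=50.0,
    bw_mb_per_ms=5.0,
    forward_ms=10.0,
))

In this toy model, the fusion buffer size only trades off how much communication can be hidden; it ignores per-message launch overhead, which is one reason a fixed buffer size can be suboptimal and an adaptive choice (as the paper proposes) can help.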
Keywords
distributed deep learning, scaling performance, performance modeling, tensor fusion