Communication Optimization Schemes For Accelerating Distributed Deep Learning Systems

Jaehwan Lee, Hyeonseong Choi, Hyeonwoo Jeong, Baekhyeon Noh, Ji Sun Shin

APPLIED SCIENCES-BASEL (2020)

Abstract
In a distributed deep learning system, the parameter server and workers must communicate to exchange gradients and parameters, and this communication cost grows with the number of workers. This paper presents a communication data optimization scheme that mitigates the throughput loss caused by communication bottlenecks in distributed deep learning. We propose two methods. The first is a layer dropping scheme that reduces the volume of communicated data: for each hidden layer, a representative value is compared against a threshold, and layers below the threshold are not transmitted. To preserve training accuracy, gradients that are not sent to the parameter server are accumulated in the worker's local cache; once the accumulated value exceeds the threshold, the cached gradients are transmitted to the parameter server. The second is an efficient threshold selection method that computes the threshold from the L1 norm of each hidden layer's gradients rather than from the raw gradient values. Our data optimization scheme reduces communication time by about 81% and total training time by about 70% in a 56 Gbit/s network environment.
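The abstract does not give implementation details, so the following is a hypothetical Python sketch of the two ideas as described: a per-layer representative value (here assumed to be the mean L1 norm), a threshold comparison backed by a local residual cache on the worker, and threshold selection from per-layer L1 norms (the percentile-based selection rule is illustrative, not necessarily the paper's exact method).

```python
import numpy as np

def select_threshold(grads_per_layer, keep_ratio=0.5):
    """Illustrative threshold selection: use per-layer mean L1 norms
    instead of raw gradients, and pick a percentile so that roughly
    `keep_ratio` of the layers exceed the threshold."""
    norms = [np.abs(g).sum() / g.size for g in grads_per_layer]
    return float(np.percentile(norms, 100 * (1 - keep_ratio)))

class LayerDroppingWorker:
    """Hypothetical worker-side layer dropping with a local cache.

    Layers whose representative value falls below the threshold are not
    transmitted; their gradients accumulate in the local cache and are
    sent once the accumulated representative value exceeds the threshold.
    """

    def __init__(self, num_layers):
        self.cache = [None] * num_layers  # per-layer residual gradients

    def step(self, grads, threshold):
        to_send = {}  # layer index -> gradient tensor to transmit
        for i, g in enumerate(grads):
            # Accumulate the new gradient onto any cached residual.
            acc = g if self.cache[i] is None else self.cache[i] + g
            # Representative value of the layer: mean L1 norm (assumption).
            rep = np.abs(acc).sum() / acc.size
            if rep > threshold:
                to_send[i] = acc               # transmit accumulated gradient
                self.cache[i] = np.zeros_like(acc)
            else:
                self.cache[i] = acc            # defer; keep accumulating locally
        return to_send
```

In this sketch, dropped layers are never lost: their gradients keep accumulating in the cache, so low-magnitude updates are eventually transmitted once their accumulated representative value crosses the threshold, which is how the abstract argues accuracy is preserved.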
Keywords
distributed deep learning, multi-GPU, data parallelism, communication optimization