Modeling Interprocessor Communication and Performance Scalability for Distributed Deep Learning Systems

2019 International Conference on High Performance Computing & Simulation (HPCS)

Abstract
As deep learning applications gain popularity, designing deep learning systems that unleash the computing power of the underlying hardware becomes a critical task. Besides the computing hardware, computer networking is also a key factor that affects the delivered performance. For large and complex models, the scalability of the system depends heavily on the design of the network as well as on software behavior. In this paper, we propose a profile-data-guided performance prediction method that estimates the performance of a system equipped with desired high-speed interconnects, based on profiling data obtained in a previous run. In particular, we leverage the open-source profiling tool SOFA to characterize the software activities of deep learning workloads running in a computer cluster, and the characterized information is used to build a performance model of the model-training process. When estimating performance, SOFA captures the performance-critical factors that drive the model's predictions. To evaluate the proposed method, four popular deep learning models are adopted in our experiments: ResNet50, Inception3, AlexNet, and VGG16. A computer cluster of four nodes is used to profile the training of these models on TensorFlow. We performed a scalability analysis to determine suitable cluster sizes and computer networks for the models. Comparing the predicted data with measurements on the cluster, our model achieves up to 95% accuracy in most cases, with a maximum error rate of 10%.
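To illustrate the general idea, the following is a minimal sketch of profile-data-guided prediction, not the paper's SOFA-based timing model: assuming a profiled per-iteration compute time and gradient volume from a previous run, and a simple ring-all-reduce communication cost, one can estimate the iteration time under a candidate interconnect. The class, function names, numbers, and the overlap factor are hypothetical assumptions.

```python
# Minimal sketch (illustrative only): predict per-iteration training time on a
# target cluster from profiling data of a previous run. The ring-all-reduce
# cost formula and all constants below are assumptions, not the paper's model.

from dataclasses import dataclass


@dataclass
class IterationProfile:
    compute_time_s: float   # compute time per iteration, from profiling
    gradient_bytes: float   # gradient volume exchanged per iteration


def predict_iteration_time(profile: IterationProfile,
                           num_nodes: int,
                           link_bandwidth_gbps: float,
                           overlap_ratio: float = 0.0) -> float:
    """Estimate per-iteration time for a target cluster configuration.

    Assumes ring all-reduce, so each node transfers roughly
    2 * (N - 1) / N of the gradient bytes, and that a fraction
    `overlap_ratio` of communication is hidden behind computation.
    """
    link_bytes_per_s = link_bandwidth_gbps * 1e9 / 8
    comm_bytes = 2 * (num_nodes - 1) / num_nodes * profile.gradient_bytes
    comm_time = comm_bytes / link_bytes_per_s
    exposed_comm = comm_time * (1.0 - overlap_ratio)
    return profile.compute_time_s + exposed_comm


if __name__ == "__main__":
    # Hypothetical profile of one ResNet50-like training iteration.
    prof = IterationProfile(compute_time_s=0.180, gradient_bytes=100e6)
    for bw in (10, 25, 100):  # candidate interconnect bandwidths in Gbps
        t = predict_iteration_time(prof, num_nodes=4,
                                   link_bandwidth_gbps=bw, overlap_ratio=0.3)
        print(f"{bw:>3} Gbps -> predicted iteration time: {t * 1000:.1f} ms")
```

Sweeping the node count and bandwidth in this way yields a scalability estimate analogous in spirit to the analysis described in the abstract, though the paper's model is built from detailed SOFA traces rather than a single closed-form formula.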
Keywords
Profiling tool, Deep learning, Distributed training, Timing model, Network