FTPipeHD: A Fault-Tolerant Pipeline-Parallel Distributed Training Approach for Heterogeneous Edge Devices

Yuhao Chen, Qianqian Yang, Shibo He, Zhiguo Shi, Jiming Chen, Mohsen Guizani

IEEE Transactions on Mobile Computing (2024)

Abstract
With the increasing proliferation of Internet-of-Things (IoT) devices, there is a growing trend towards distributing the power of deep learning (DL) among edge devices rather than centralizing it at the cloud. To deploy deep and complex models at edge devices with limited resources, model partitioning of deep neural network (DNN) models has been widely studied. However, most of the existing literature only considers distributing the inference model while still training the model at the cloud. In this paper, we propose FTPipeHD, a novel DNN training approach that trains DNN models across distributed heterogeneous devices with a fault-tolerance mechanism. To accelerate training under the time-varying computing power of each device, we optimize the partition points dynamically according to real-time computing capacities. We also propose a novel weight redistribution approach that periodically replicates the weights to both the neighboring nodes and the central node, which withstands the failure of multiple devices during training while incurring limited communication costs. Our numerical results demonstrate that FTPipeHD trains 6.8 times faster than the state-of-the-art method when the computing capacity of the best device is 10 times that of the worst one. It is also shown that the proposed method is able to accelerate training even in the presence of device failures.
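The dynamic partitioning idea can be illustrated with a minimal sketch. The abstract does not specify the paper's actual profiling or optimization procedure, so the per-layer costs, per-device speed estimates, and the greedy rule below are illustrative assumptions: partition points are chosen so that each device's share of the total layer cost is roughly proportional to its measured computing capacity.

```python
# Minimal sketch (not FTPipeHD's exact algorithm) of capacity-aware
# pipeline partitioning: split a chain of layers across devices so each
# device's workload is roughly proportional to its computing capacity.

def partition_points(layer_costs, device_speeds):
    """Return, for each device except the last, the layer index at
    which its pipeline stage ends (exclusive)."""
    total_cost = sum(layer_costs)
    total_speed = sum(device_speeds)
    points, acc, dev = [], 0.0, 0
    # Target workload for the current device, proportional to its speed.
    target = total_cost * device_speeds[0] / total_speed
    for i, cost in enumerate(layer_costs):
        acc += cost
        # Close the current stage once its target workload is reached,
        # as long as devices remain to take the rest of the layers.
        if acc >= target and dev < len(device_speeds) - 1:
            points.append(i + 1)
            dev += 1
            acc = 0.0
            target = total_cost * device_speeds[dev] / total_speed
    return points

# Example: 8 layers, 3 heterogeneous devices with speeds 1 : 2 : 4.
print(partition_points([1, 1, 2, 2, 3, 3, 4, 4], [1, 2, 4]))  # [3, 6]
```

Re-running this whenever the measured speeds change captures the abstract's notion of adapting partition points to time-varying computing power.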
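Similarly, the periodic weight-replication idea can be sketched as follows: every few steps, each worker copies its stage's weights to its neighboring node and to the central node, so a failed worker's stage can be restored from either replica. All names (`Worker`, `CentralNode`, `replicate`, `recover`), the replication period, and the recovery rule are hypothetical, not FTPipeHD's actual protocol.

```python
# Minimal sketch (assumed design, not the paper's protocol) of periodic
# weight replication to a neighbor and the central node for fault tolerance.
import copy

class Worker:
    def __init__(self, rank, weights):
        self.rank = rank
        self.weights = weights          # this worker's pipeline stage
        self.neighbor_backup = None     # replica of the predecessor's stage

class CentralNode:
    def __init__(self):
        self.backups = {}               # rank -> latest weight replica

def replicate(workers, central, step, period=100):
    """Every `period` steps, copy each stage to its neighbor and the center."""
    if step % period != 0:
        return
    for i, w in enumerate(workers):
        snapshot = copy.deepcopy(w.weights)
        workers[(i + 1) % len(workers)].neighbor_backup = snapshot
        central.backups[w.rank] = snapshot

def recover(failed_rank, workers, central):
    """Restore a failed stage: prefer the neighbor's replica, else the center's."""
    successor = workers[(failed_rank + 1) % len(workers)]
    return successor.neighbor_backup or central.backups[failed_rank]
```

Keeping a replica on a neighbor as well as the central node matches the abstract's claim of tolerating multiple device failures at limited communication cost, since recovery usually stays local.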
Keywords
Training, Computational modeling, Servers, Data models, Load modeling, Fault tolerant systems, Fault tolerance, Distributed training, Edge training