Inverse-GMM: A Latency Distribution Shaping Method for Industrial Cooperative Deep Learning Systems

IEEE Journal on Selected Areas in Communications (2023)

Abstract
Front-deployed deep learning is a promising technology for next-generation industrial applications, as it can extract essential information from high-dimensional sensor data. However, some of these computation-heavy tasks must be offloaded from resource-constrained front-end devices to edge or cloud devices, which forms a cooperative deep learning system through the exchange of intermediate data. The inference efficiency of such a system is therefore highly correlated with the communication latency caused by the non-stationary, multipath-rich industrial fading channel. This paper proposes a novel method for controlling the distribution of communication latency, which can support an efficient cooperative deep learning architecture in harsh industrial environments. The proposed method is essentially an inverse process of the Gaussian Mixture Model (GMM): it adjusts latency samples so that their distribution approaches a given, arbitrarily shaped target function. To achieve this objective, a new variant of the Expectation-Maximization (EM) algorithm in the analytical domain is derived to decompose an arbitrary distribution shape into multiple Gaussian kernels, and an optimized stochastic resource allocation algorithm is proposed to approximate each Gaussian kernel. The performance of the proposed method is verified with both the classical Rician channel model and field-measured industrial fading channel responses.
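To make the GMM-decomposition step concrete: the paper derives an EM variant in the analytical domain, but the underlying idea of breaking a latency distribution into Gaussian kernels can be sketched with textbook sample-based EM. The bimodal latency shape and all numeric values below are assumptions for illustration, not data from the paper.

```python
import numpy as np

def em_gmm_1d(x, k, iters=200):
    """Textbook sample-based EM for a 1-D Gaussian mixture (not the
    paper's analytical-domain variant; an illustrative stand-in)."""
    mu = np.quantile(x, (np.arange(k) + 0.5) / k)  # spread initial means over the data
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each Gaussian kernel for each sample
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate kernel weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Hypothetical bimodal latency distribution (ms): a short-latency mode and a
# long-latency mode, e.g. from retransmissions on a fading channel (assumed values)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(2.0, 0.3, 4000), rng.normal(6.0, 0.8, 4000)])
w, mu, var = em_gmm_1d(x, k=2)  # recovers two kernels near 2 ms and 6 ms
```

In the paper's setting, the decomposition runs in the opposite direction of ordinary GMM fitting: a target shape is first split into kernels like these, and the resource allocation then steers the measured latency samples toward each kernel.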
Keywords
Cooperative deep learning, distribution shaping, GMM, EM, resource allocation