Distributed Recurrent Neural Network Learning Via Metropolis-Weights Consensus
Neural Information Processing (ICONIP 2017), Part IV (2017)
Abstract
When data are shared among arbitrarily connected machines, training becomes a challenging problem: each node is initialized with a local value, and the nodes must compute the average of these values over the interconnection topology so that the distributed objective converges to the same solution as its centralized counterpart. Decentralized average consensus (DAC) is the most popular strategy for this task due to its low complexity. In this paper, a random topology is chosen to model the network of agents, with a given probability of interconnection between every pair of neighboring nodes. The global regularized least-squares problem is solved by a decentralized optimization procedure; the key question is then which output weight vector the agents should use at test time. Here DAC intervenes to drive all agents toward the same vector; without it, each agent is limited to purely local training. The DAC strategy must therefore be chosen so that all agents converge to the same state. The key contribution is to apply Metropolis weights as the average-consensus strategy for computing the mean of the nodes' updates at each step, validated over several tests; this protocol is shown to converge for networks without packet losses. Experimental results on prediction and identification tasks show favorable performance in terms of accuracy and efficiency.
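A minimal sketch of Metropolis-weights average consensus is given below. The graph construction (a ring backbone plus random extra edges, so connectivity is guaranteed) and the iteration count are illustrative assumptions, not the paper's experimental setup; the weight rule itself is the standard Metropolis one, where the weight on edge (i, j) is 1 / (1 + max(d_i, d_j)) and the self-weight absorbs the remainder:

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weight matrix for an undirected graph.

    adj: (n, n) symmetric 0/1 adjacency matrix with zero diagonal.
    Returns a symmetric, doubly stochastic matrix W with positive
    diagonal, so repeated averaging x <- W @ x converges to the mean
    on any connected graph.
    """
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()  # self-weight keeps rows summing to 1
    return W

rng = np.random.default_rng(0)
n, p = 8, 0.3

# Ring backbone guarantees connectivity; extra edges appear with probability p.
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
extra = np.triu(rng.random((n, n)) < p, 1).astype(int)
adj = np.clip(adj + extra + extra.T, 0, 1)
np.fill_diagonal(adj, 0)

x = rng.standard_normal(n)   # each agent's local scalar value
target = x.mean()            # the centralized average we want to reach
W = metropolis_weights(adj)
for _ in range(200):
    x = W @ x                # one synchronous consensus round
```

After enough rounds every entry of `x` is (numerically) equal to `target`, which is the property the paper relies on to make all agents share the same output weight vector.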
Keywords
Distributed learning, Metropolis weights, Recurrent neural network, Alternating direction method of multipliers