
Boosting the Training Time of Weakly Coordinated Distributed Machine Learning.

IEEE BigData (2021)

Abstract
In this paper, we propose a novel communication-efficient algorithm for distributed matrix factorisation. Our goal is to find a good trade-off between the communication overhead and the overall model training time. In our setting, the training data is distributed across multiple servers that aim to learn a joint machine learning model. In contrast to standard distributed computation, due to privacy concerns the participating servers are not allowed to share raw data; sharing the non-personal model parameters, however, is allowed. We investigate the drawbacks of traditional strongly coordinated distributed techniques and compare them to weakly coordinated gossip approaches. The advantage of strongly coordinated approaches is that the learning process closely mimics that of a centralised algorithm, so the overall training time can be kept at a minimum; however, this comes at the expense of a large communication footprint. The weakly coordinated gossip approach, on the other hand, offers a communication-efficient solution but may require a large amount of training time to reach good accuracy. As a solution, we develop a hybrid approach that combines the two. We apply this hybrid approach to a latent factor model solving a top-N recommendation problem and show that it achieves good accuracy in a relatively short training time with minimal communication overhead, particularly on very sparse data.
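To make the hybrid idea concrete, the following is a minimal, self-contained sketch (not the authors' exact algorithm) of how infrequent global averaging of the shared item factors ("strong" coordination) can be interleaved with pairwise gossip averaging ("weak" coordination) in a distributed matrix factorisation setting. All node counts, factor dimensions, hyperparameters, and the synthetic rating matrix are assumptions introduced purely for illustration.

```python
# Illustrative sketch of hybrid strongly/weakly coordinated distributed
# matrix factorisation. Raw ratings never leave a node; only the
# non-personal item factors V are exchanged between nodes.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_users, n_items, k = 4, 40, 30, 8   # assumed sizes for the demo
lr, reg = 0.02, 0.05

# Synthetic low-rank ratings with ~15% observed entries (very sparse).
true_U = rng.normal(size=(n_users, k))
true_V = rng.normal(size=(n_items, k))
R = true_U @ true_V.T + 0.1 * rng.normal(size=(n_users, n_items))
mask = rng.random((n_users, n_items)) < 0.15
user_shards = np.array_split(np.arange(n_users), n_nodes)   # private shards

U = rng.normal(scale=0.1, size=(n_users, k))                 # local user factors
V = [rng.normal(scale=0.1, size=(n_items, k)) for _ in range(n_nodes)]

def local_sgd(node):
    """One SGD pass over the node's private shard of observed ratings."""
    for u in user_shards[node]:
        for i in np.flatnonzero(mask[u]):
            err = R[u, i] - U[u] @ V[node][i]
            U[u] += lr * (err * V[node][i] - reg * U[u])
            V[node][i] += lr * (err * U[u] - reg * V[node][i])

for rnd in range(30):
    for node in range(n_nodes):
        local_sgd(node)
    if rnd % 10 == 9:
        # Strong coordination: occasional global average of the item factors.
        V_avg = np.mean(V, axis=0)
        V = [V_avg.copy() for _ in range(n_nodes)]
    else:
        # Weak coordination: each node gossips with one random peer.
        for node in range(n_nodes):
            peer = rng.integers(n_nodes)
            V_mix = 0.5 * (V[node] + V[peer])
            V[node], V[peer] = V_mix.copy(), V_mix.copy()

rmse = np.sqrt(np.mean([(R[u, i] - U[u] @ V[0][i]) ** 2
                        for u in range(n_users)
                        for i in np.flatnonzero(mask[u])]))
print(f"training RMSE on observed entries: {rmse:.3f}")
```

The gossip rounds keep per-round traffic low (one peer exchange per node), while the occasional global averaging step plays the role of strong coordination and speeds up convergence; the ratio between the two is the knob the paper's trade-off is about.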
Key words
Recommender Systems, Distributed Learning, Decentralised Matrix Factorisation, Matrix Factorisation, Communication Efficiency