Distributed mixture-of-experts for Big Data using PETUUM framework

2017 36th International Conference of the Chilean Computer Science Society (SCCC), 2017

Abstract
Today, organizations are beginning to realize the importance of using as much data as possible for decision-making in their strategy. Finding relevant patterns in enormous amounts of data requires automatic machine learning algorithms; among them, a popular option is the mixture-of-experts, which models the data using a set of local experts. The problem with applying typical learning algorithms to Big Data is handling these large datasets in primary memory. In this paper, we propose a methodology to learn a mixture-of-experts in a distributed way using the PETUUM platform. In particular, we propose to learn the parameters of the mixture-of-experts by adapting standard stochastic gradient descent to a distributed setting. This methodology is applied to people detection on standard real datasets, considering accuracy and precision metrics, among others. The results show consistent performance of mixture-of-experts models, where the best number of experts varies according to the particular dataset. We also show the advantages of the distributed approach through the near-linear decrease in average training time as the number of processors grows. In future work, we expect to apply this methodology to mixture-of-experts with embedded variable selection.
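The abstract describes the model only at a high level; as a rough illustration of the underlying idea, the sketch below implements a minimal mixture-of-experts binary classifier (softmax gate over logistic experts) trained with plain single-machine stochastic gradient descent. All names, hyperparameters, and the synthetic data are illustrative assumptions, and the PETUUM-based distributed parameter updates described in the paper are not reproduced here.

```python
import numpy as np

# Minimal mixture-of-experts sketch: a softmax gating network combines K
# logistic-regression experts, and all weights are updated with SGD on the
# negative log-likelihood. Hypothetical, single-machine version only.

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MixtureOfExperts:
    def __init__(self, n_features, n_experts, lr=0.05):
        self.V = rng.normal(scale=0.01, size=(n_experts, n_features))  # gate weights
        self.W = rng.normal(scale=0.01, size=(n_experts, n_features))  # expert weights
        self.lr = lr

    def forward(self, x):
        g = softmax(self.V @ x)      # gating probabilities, shape (K,)
        s = sigmoid(self.W @ x)      # each expert's P(y=1 | x), shape (K,)
        return g, s, float(g @ s)    # mixture prediction P(y=1 | x)

    def sgd_step(self, x, y):
        g, s, _ = self.forward(x)
        e = np.where(y == 1, s, 1.0 - s)        # per-expert likelihood of the label
        p = max(float(g @ e), 1e-12)            # mixture likelihood
        h = g * e / p                            # posterior responsibility of each expert
        # Gradient ascent on the log-likelihood (equivalently SGD on the NLL).
        self.W += self.lr * np.outer(h * (y - s), x)
        self.V += self.lr * np.outer(h - g, x)
        return -np.log(p)

# Tiny synthetic demo with a nonlinear labeling rule.
X = rng.normal(size=(2000, 5))
y = (X[:, 0] * np.sign(X[:, 1]) > 0).astype(int)

moe = MixtureOfExperts(n_features=5, n_experts=4)
for epoch in range(10):
    for i in rng.permutation(len(X)):
        moe.sgd_step(X[i], y[i])

preds = np.array([moe.forward(x)[2] > 0.5 for x in X])
print("training accuracy:", (preds == y).mean())
```

Distributing this training, as the paper proposes, would amount to running these per-sample updates on data partitions across workers while synchronizing the shared weight matrices through a parameter server such as the one PETUUM provides.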
Keywords
distributed mixture-of-experts, Big Data, automatic machine learning algorithms, local experts, PETUUM framework, standard stochastic gradient descent, precision metrics, embedded variable selection