Adam With Bandit Sampling For Deep Learning

Advances in Neural Information Processing Systems (NeurIPS 2020)

Abstract
Adam is a widely used optimization method for training deep learning models. It computes individual adaptive learning rates for different parameters. In this paper, we propose a generalization of Adam, called ADAMBS, that also adapts to different training examples based on their importance to the model's convergence. To achieve this, we maintain a distribution over all examples, selecting a mini-batch in each iteration by sampling according to this distribution, which we update using a multi-armed bandit algorithm. This ensures that examples that are more beneficial to model training are sampled with higher probability. We theoretically show that ADAMBS improves the convergence rate of Adam, achieving O(√(log n/T)) instead of O(√(n/T)) in some cases. Experiments on various models and datasets demonstrate ADAMBS's fast convergence in practice.
Keywords
bandit sampling, deep learning, adam
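
The abstract describes the mechanism only at a high level. Below is a minimal sketch, not the authors' implementation, of how Adam can be combined with an EXP3-style bandit distribution over training examples, shown on a toy least-squares problem. The exploration rate `gamma`, the use of per-example gradient norms as bandit rewards, and all hyperparameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n examples for a linear regression problem.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Adam state.
w = np.zeros(d)
m = np.zeros(d)
v = np.zeros(d)
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.01

# Bandit (EXP3-style) state: one weight per training example.
weights = np.ones(n)
gamma = 0.1        # exploration rate (assumed value)
batch_size = 32

for t in range(1, 501):
    # Sampling distribution mixes learned weights with uniform exploration.
    p = (1 - gamma) * weights / weights.sum() + gamma / n
    p = p / p.sum()
    idx = rng.choice(n, size=batch_size, p=p, replace=True)

    # Importance-weighted mini-batch gradient: each sampled example is
    # reweighted by 1/(n * p_i), giving an unbiased estimate of the mean gradient.
    residual = X[idx] @ w - y[idx]
    per_example_grads = residual[:, None] * X[idx]          # shape (batch, d)
    grad = (per_example_grads / (n * p[idx])[:, None]).mean(axis=0)

    # Standard Adam update.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)

    # Bandit feedback (assumed reward): examples with larger gradient norms
    # get their sampling weight boosted, EXP3-style, scaled by 1/p.
    reward = np.linalg.norm(per_example_grads, axis=1)
    weights[idx] *= np.exp(gamma * reward / (n * p[idx]))
    weights /= weights.max()   # rescale to keep weights numerically bounded

print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```

In this sketch the importance weights keep the gradient estimate unbiased even though sampling is non-uniform, while the multiplicative weight update concentrates sampling probability on examples whose gradients remain informative, which is the intuition behind the improved O(√(log n/T)) rate stated in the abstract.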