AdaBlock: SGD with Practical Block Diagonal Matrix Adaptation for Deep Learning

International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 151, 2022

Abstract
We introduce ADABLOCK, a class of adaptive gradient methods that extends popular approaches such as ADAM by adopting the simple and natural idea of block-diagonal matrix adaptation to effectively exploit the structural characteristics of deep learning architectures. Unlike other quadratic or block-diagonal approaches, ADABLOCK has complete freedom in selecting the block-diagonal groups, providing a wider trade-off that is applicable even to extremely high-dimensional problems. We provide convergence and generalization error bounds for ADABLOCK, and study, both theoretically and empirically, the impact of the block size on these bounds and the advantages over the usual diagonal approaches. In addition, we propose a randomized layer-wise variant of ADABLOCK to further reduce computation and memory footprint, and devise an efficient spectrum-clipping scheme that lets ADABLOCK benefit from SGD's superior generalization performance. Extensive experiments on several deep learning tasks demonstrate the benefits of block-diagonal adaptation over adaptive diagonal methods, vanilla SGD, and modified versions of full-matrix adaptation.
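To make the core idea concrete, below is a minimal NumPy sketch of a block-diagonal Adam-style update, not the paper's exact algorithm: the function name, the fixed block size, and the eigendecomposition-based inverse square root are illustrative assumptions. Each block of the second-moment statistic is a small matrix built from gradient outer products, rather than the elementwise squared gradient used by diagonal Adam.

```python
import numpy as np

def adablock_style_step(theta, grad, m, V, t, block_size=4,
                        lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One illustrative block-diagonal adaptive step (hypothetical sketch).

    theta, grad, m: flat parameter/gradient/first-moment vectors of length d
    (assumed divisible by block_size here, for simplicity).
    V: list of per-block second-moment matrices, each (block_size, block_size).
    t: 1-based step counter used for Adam-style bias correction.
    """
    m[:] = beta1 * m + (1 - beta1) * grad
    d = theta.size
    for i, s in enumerate(range(0, d, block_size)):
        g = grad[s:s + block_size]
        # Second-moment block: gradient outer product instead of the
        # elementwise square used by diagonal Adam.
        V[i] = beta2 * V[i] + (1 - beta2) * np.outer(g, g)
        # Bias-corrected moments, as in Adam.
        m_hat = m[s:s + block_size] / (1 - beta1 ** t)
        V_hat = V[i] / (1 - beta2 ** t)
        # Precondition with the inverse square root of the block, computed
        # via an eigendecomposition (V_hat is symmetric PSD).
        w, Q = np.linalg.eigh(V_hat)
        inv_sqrt = Q @ np.diag(1.0 / (np.sqrt(np.maximum(w, 0.0)) + eps)) @ Q.T
        theta[s:s + block_size] -= lr * inv_sqrt @ m_hat
    return theta, m, V
```

With block_size=1 the outer product degenerates to the squared gradient and the update reduces to diagonal Adam; larger blocks move toward full-matrix adaptation, which is the trade-off described above.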