On Gradient Descent for On-Chip Learning

John Sum, Janet C.C. Chang

2021 International Conference on Technologies and Applications of Artificial Intelligence (TAAI)(2021)

Abstract
Recently, it has been shown that gradient descent learning (GDL) may fail when training a neural network (NN) under persistent weight noise. In the presence of multiplicative weight noise (resp. node noise), the model generated by GDL is not the desired model that minimizes the expected mean-squared error (MSE) subject to multiplicative weight noise (resp. node noise). In this paper, the analysis is formalized under a conceptual framework called suitability and extended to gradient descent with momentum (GDM). A learning algorithm is suitable for on-chip implementation to train a NN with weight noise if its learning objective is identical to the expected MSE of the NN under the same noise. In this regard, it is shown that GDL and GDM are not suitable for on-chip implementation. Theoretical analyses, together with experimental evidence, are presented in support of these claims.
Keywords
Analogue Neural Network,Gradient Descent,Gradient Descent with Momentum,On-Chip Learning,Suitability,Weight Noise
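The mismatch described in the abstract can be illustrated with a minimal sketch: plain GDL on a toy 1-D regression problem where every forward pass reads a noisy copy of the weight, w → w(1 + b) with b ~ N(0, s²). This is an illustrative assumption, not the paper's exact experimental setup; for this toy problem the expected-MSE minimizer is w*/(1 + s²), while GDL's updates drift toward w* itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression: targets generated by the true weight w* = 2.
x = rng.normal(size=200)
y = 2.0 * x

def gdl_with_weight_noise(steps=4000, lr=0.05, s=0.5):
    """GDL where each forward pass reads a noisy weight w * (1 + b),
    b ~ N(0, s^2), mimicking persistent multiplicative weight noise
    (an illustrative setup, not the paper's exact experiment)."""
    w, trace = 0.0, []
    for _ in range(steps):
        b = rng.normal(scale=s)
        w_noisy = w * (1.0 + b)                # noisy weight in the forward pass
        grad = np.mean((w_noisy * x - y) * x)  # gradient of the sampled MSE
        w -= lr * grad                         # but the *stored* weight is updated
        trace.append(w)
    return float(np.mean(trace[-1000:]))       # time-average to smooth the noise

s = 0.5
w_gdl = gdl_with_weight_noise(s=s)

# Minimizer of the *expected* MSE, E_b[(w(1+b)x - y)^2], for this data:
# w = w* / (1 + s^2), strictly smaller in magnitude than w* = 2.
w_emse = 2.0 / (1.0 + s ** 2)
print(w_gdl, w_emse)  # GDL settles near w* = 2, away from the expected-MSE minimizer
```

The gap between the two values is the paper's suitability criterion in miniature: the objective GDL implicitly optimizes under this noise is not the expected MSE of the noisy network.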