Investigating Random Variations of the Forward-Forward Algorithm for Training Neural Networks

IJCNN (2023)

Abstract
The Forward-Forward (FF) algorithm is a new method for training neural networks, proposed by Hinton as an alternative to the traditional Backpropagation (BP) algorithm. The FF algorithm replaces the backward computations of the learning process with a second forward pass. Each layer has its own objective function, which aims to be high for positive data and low for negative data. This paper presents a preliminary investigation into variations of the FF algorithm, such as incorporating local backpropagation to create a hybrid network that converges robustly while retaining the ability to avoid backward computations where needed, for example, in non-differentiable parts of the network. Additionally, a pseudo-random scheme for selecting which stacks of layers to train at each epoch is proposed to speed up learning.
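The per-layer objective the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the common sum-of-squares "goodness" and a logistic loss against a threshold `theta`, and all hyperparameter values (`theta`, `lr`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FFLayer:
    """One Forward-Forward layer: h = ReLU(Wx + b), trained with a purely
    local objective on its goodness g = sum(h**2). No gradients flow to or
    from other layers, so no backward pass through the network is needed."""

    def __init__(self, n_in, n_out, theta=2.0, lr=0.03):
        self.W = rng.normal(0.0, 0.1, (n_out, n_in))
        self.b = np.zeros(n_out)
        self.theta, self.lr = theta, lr

    def forward(self, x):
        return np.maximum(0.0, self.W @ x + self.b)

    def train_step(self, x, positive):
        """One local update. Loss is log(1 + exp(-(g - theta))) for positive
        data and log(1 + exp(g - theta)) for negative data, so the layer
        pushes goodness above theta for positives and below for negatives."""
        h = self.forward(x)
        g = float(np.sum(h * h))               # goodness of this layer
        sign = -1.0 if positive else 1.0       # direction to move g
        dL_dg = sign * sigmoid(sign * (g - self.theta))
        dL_dh = dL_dg * 2.0 * h                # through g = sum(h**2)
        dL_dz = dL_dh * (h > 0)                # ReLU gate
        self.W -= self.lr * np.outer(dL_dz, x)
        self.b -= self.lr * dL_dz
        return g
```

A few repeated positive steps on the same input should drive the layer's goodness upward, since the update only ever uses quantities computed in that layer's own forward pass.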
Keywords
Backpropagation, Forward-Forward algorithm, Learning Procedures, Deep Neural Networks