Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization
arXiv (2023)
Abstract
"Forward-only" algorithms, which train neural networks while avoiding a
backward pass, have recently gained attention as a way of solving the
biologically unrealistic aspects of backpropagation. Here, we first address
compelling challenges related to the "forward-only" rules, which include
reducing the performance gap with backpropagation and providing an analytical
understanding of their dynamics. To this end, we show that the forward-only
algorithm with top-down feedback is well-approximated by an
"adaptive-feedback-alignment" algorithm, and we analytically track its
performance during learning in a prototype high-dimensional setting. Then, we
compare different versions of forward-only algorithms, focusing on the
Forward-Forward and PEPITA frameworks, and we show that they share the same
learning principles. Overall, our work unveils the connections between three
key neuro-inspired learning rules, providing a link between "forward-only"
algorithms, i.e., Forward-Forward and PEPITA, and an approximation of
backpropagation, i.e., Feedback Alignment.
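To make the "forward-only with top-down feedback" idea concrete, the following is a minimal NumPy sketch in the spirit of PEPITA: a second forward pass is run on an input modulated by the output error projected through a fixed random feedback matrix, and the weights are updated from the contrast between the two passes. The network sizes, learning rate, and the specific update form are illustrative assumptions for a toy two-layer network, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(0, 0.1, (n_hid, n_in))   # input -> hidden
W2 = rng.normal(0, 0.1, (n_out, n_hid))  # hidden -> output
F = rng.normal(0, 0.1, (n_in, n_out))    # fixed random top-down feedback

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    h = relu(W1 @ x)
    y = W2 @ h
    return h, y

x = rng.normal(size=n_in)
target = np.eye(n_out)[1]  # one-hot target

# First (clean) forward pass
h1, y1 = forward(x)
e = y1 - target  # output error

# Second forward pass on the error-modulated input:
# the error is fed back to the input layer through F (top-down feedback)
x_mod = x + F @ e
h2, y2 = forward(x_mod)

# PEPITA-style updates: contrast activations between the two passes,
# so no backward pass through the network is ever needed
lr = 0.01
W1 -= lr * np.outer(h1 - h2, x_mod)
W2 -= lr * np.outer(e, h2)
```

Because the feedback matrix F is fixed and random, the effective learning signal resembles Feedback Alignment, which is the connection the paper makes precise with its "adaptive-feedback-alignment" approximation.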