Representation Learning in a Decomposed Encoder Design for Bio-inspired Hebbian Learning
CoRR (2023)
Abstract
Modern data-driven machine learning system designs exploit inductive biases
on architectural structure, invariance and equivariance requirements, task-specific loss functions, and computational optimization tools. Previous works have illustrated that human-specified quasi-invariant filters in the early layers of an encoder can serve as a powerful inductive bias, yielding better robustness and transparency in learned
classifiers. This paper explores this further in the context of representation
learning with local plasticity rules, i.e., bio-inspired Hebbian learning. We
propose a modular framework trained with a bio-inspired variant of contrastive
predictive coding (Hinge CLAPP Loss). Our framework is composed of parallel
encoders each leveraging a different invariant visual descriptor as an
inductive bias. We evaluate the representation learning capacity of our system
in a classification scenario on image data of various difficulties (GTSRB,
STL10, CODEBRIM) as well as video data (UCF101). Our findings indicate that
this form of inductive bias can be beneficial in closing the gap between models
with local plasticity rules and backpropagation models as well as learning more
robust representations in general.
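The exact form of the Hinge CLAPP loss is not given in this abstract. Purely as an illustrative sketch, a hinge-based contrastive predictive objective in the spirit of CLAPP might score a local prediction of a future representation against a positive (temporally adjacent) and a negative (unrelated) sample; the function name, margin of 1, and single-negative pairing below are all assumptions, not the paper's definition:

```python
import numpy as np

def hinge_clapp_loss(z_t, z_future, z_negative, W):
    """Sketch of a CLAPP-style hinge loss (assumed form).

    W @ z_t is a learned linear prediction of the next representation.
    The hinge terms push its dot product with the true future sample
    above the margin and with the negative sample below it.
    """
    pred = W @ z_t
    pos_score = pred @ z_future    # should be large for the positive pair
    neg_score = pred @ z_negative  # should be small for the negative pair
    # each hinge term saturates at zero once its margin of 1 is met,
    # which keeps the update local and bounded
    return max(0.0, 1.0 - pos_score) + max(0.0, 1.0 + neg_score)
```

Because the gradient of each hinge term depends only on the two representations it compares, an objective of this shape is compatible with layer-local (Hebbian-like) updates rather than end-to-end backpropagation.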