Prototype Analysis in Hopfield Networks with Hebbian Learning
arXiv (2024)
Abstract
We discuss prototype formation in the Hopfield network. Typically, Hebbian
learning with highly correlated states leads to degraded memory performance. We
show this type of learning can lead to prototype formation, where unlearned
states emerge as representatives of large correlated subsets of states,
alleviating capacity woes. This process has similarities to prototype learning
in human cognition. We provide a substantial literature review of prototype
learning in associative memories, covering contributions from psychology,
statistical physics, and computer science. We analyze prototype formation from
a theoretical perspective and derive a stability condition for these states
based on the number of examples of the prototype presented for learning, the
noise in those examples, and the number of non-example states presented. The
stability condition is used to construct a probability of stability for a
prototype state as the factors of stability change. We also note similarities
to traditional network analysis, allowing us to find a prototype capacity. We
corroborate these expectations of prototype formation with experiments using a
simple Hopfield network with standard Hebbian learning. We extend our
experiments to a Hopfield network trained on data with multiple prototypes and
find the network is capable of stabilizing multiple prototypes concurrently. We
measure the basins of attraction of the multiple prototype states, finding that
attractor strength grows with the number of examples and with the agreement
among examples. We link the stability and dominance of prototype states to the energy
profile of these states, particularly when comparing the profile shape to
target states or other spurious states.
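As a concrete illustration of the setup the abstract describes, the following sketch trains a Hopfield network with standard Hebbian learning on noisy examples of a single unlearned prototype, then checks whether the prototype is a fixed point of the dynamics. The network size, example count, and noise level are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200        # neurons (illustrative choice, not from the paper)
K = 20         # number of noisy examples of the prototype
flip_p = 0.05  # probability that each example bit disagrees with the prototype

# Hypothetical prototype: a random +/-1 state the network never sees directly.
prototype = rng.choice([-1, 1], size=N)

# Noisy examples: flip each prototype bit independently with probability flip_p.
flips = np.where(rng.random((K, N)) < flip_p, -1, 1)
examples = prototype * flips

# Standard Hebbian learning: normalized sum of outer products of the examples.
W = examples.T @ examples / N
np.fill_diagonal(W, 0)  # no self-connections

# The unlearned prototype is stable if one synchronous update leaves it fixed.
stable = np.array_equal(np.sign(W @ prototype), prototype)
print("prototype stable:", stable)
```

With many examples and low noise, the aligned fields at every neuron are strongly positive, so the prototype sits at a fixed point even though it was never stored; raising `flip_p` or shrinking `K` weakens this effect, consistent with the stability condition the abstract describes.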