Distributed Representations Enable Robust Multi-Timescale Symbolic Computation in Neuromorphic Hardware
arXiv (2024)
Abstract
Programming recurrent spiking neural networks (RSNNs) to robustly perform
multi-timescale computation remains a difficult challenge. To address this, we
describe a single-shot weight learning scheme to embed robust multi-timescale
dynamics into attractor-based RSNNs, by exploiting the properties of
high-dimensional distributed representations. We embed finite state machines
into the RSNN dynamics by superimposing a symmetric autoassociative weight
matrix with asymmetric transition terms, each formed by binding an input
vector with a heteroassociative outer product between states. Our approach is
validated in simulations with highly non-ideal weights, in an experimental
closed-loop memristive hardware setup, and on Loihi 2, where it
scales seamlessly to large state machines. This work introduces a scalable
approach to embed robust symbolic computation through recurrent dynamics into
neuromorphic hardware, without requiring parameter fine-tuning or significant
platform-specific optimisation. Moreover, it demonstrates that distributed
symbolic representations serve as a highly capable representation-invariant
language for cognitive algorithms in neuromorphic hardware.
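The abstract alone does not pin down the exact construction, but the following minimal Python sketch illustrates one plausible reading of the weight scheme it describes: a symmetric autoassociative term that makes each state an attractor, plus asymmetric input-bound heteroassociative terms that implement the state-machine edges. Everything here is an illustrative assumption: the three-state machine, the dimensionality N, the bipolar random vectors, elementwise (Hadamard) binding on the source side, and the two-phase transition/cleanup update are a non-spiking Hopfield-style stand-in for the authors' RSNN attractor dynamics, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N = 1000  # representation dimensionality (illustrative; the paper's values may differ)

# Hypothetical 3-state machine with states A, B, C and input symbols a, b.
states = {s: rng.choice([-1, 1], size=N).astype(float) for s in "ABC"}
inputs = {x: rng.choice([-1, 1], size=N).astype(float) for x in "ab"}

# Symmetric autoassociative term: makes each state vector a stable attractor.
W_auto = sum(np.outer(v, v) for v in states.values()) / N

# Asymmetric transition terms: for each edge (src --x--> dst), bind the input
# vector into the heteroassociative outer product of dst and src. Binding by
# elementwise multiplication on the source side is one plausible VSA-style choice.
edges = [("A", "a", "B"), ("B", "a", "C"), ("C", "b", "A")]
W_trans = sum(
    np.outer(states[dst], states[src] * inputs[x]) for src, x, dst in edges
) / N

def transition(state_vec, input_vec, cleanup_steps=3):
    """One FSM step in a Hopfield-style abstraction of the attractor dynamics."""
    # Binding the current activity with the input activates the matching edge term.
    v = np.sign(W_trans @ (state_vec * input_vec))
    # The autoassociative term then relaxes the noisy result onto an attractor.
    for _ in range(cleanup_steps):
        v = np.sign(W_auto @ v)
    return v

v = transition(states["A"], inputs["a"])
print((v == states["B"]).mean())  # ~1.0: the machine moved from state A to B
```

Because the state and input vectors are random and high dimensional, the crosstalk between superimposed terms stays small relative to the matching edge, which is what gives this style of single-shot embedding its robustness to weight non-idealities.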