Meta neurons improve spiking neural networks for efficient spatio-temporal learning.

Neurocomputing(2023)

Abstract
Spiking neural networks (SNNs) have incorporated many biologically plausible structures and learning principles, and hence play a critical role in bridging the gap between artificial and natural neural networks. A spike is a sparse membrane-potential signal describing above-threshold event-based firing and sub-threshold dynamic integration, and can be considered an alternative, uniform, and efficient way of representing and computing spatio-temporal information. Most SNNs today adopt the leaky integrate-and-fire (LIF) neuron, with its 1st-order dynamics, as the key mechanism of membrane-potential integration. The LIF neuron is efficient for dynamic coding but remains far simpler than its biological counterpart, which can generate various types of firing patterns. Here we go further by defining “meta” neuron models that contain 1st- or 2nd-order dynamics and a recovery variable to simulate hyperpolarization. Both shallow and deep SNNs were used to test the efficiency and flexibility of meta neuron models on various benchmark machine learning tasks, covering spatial learning (e.g., MNIST, Fashion-MNIST, NETtalk, CIFAR-10), temporal learning (e.g., TIDigits, TIMIT), and spatio-temporal learning (e.g., N-MNIST). SNNs using these meta neurons were optimized by backpropagation with approximate gradients, and achieved markedly higher spatio-temporal capability, without loss of accuracy, compared to SNNs using regular LIF neurons.
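To make the contrast in the abstract concrete, below is a minimal discrete-time sketch of a 1st-order LIF neuron next to a 2nd-order model with a recovery variable (here written in the standard Izhikevich form). All function names and parameter values are illustrative assumptions for exposition, not the paper's actual meta neuron definitions.

```python
def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One step of a 1st-order leaky integrate-and-fire (LIF) neuron.

    v: membrane potential, x: input current, tau: leak time constant.
    Returns (new_v, spike). Parameter values are illustrative.
    """
    v = v + (x - v) / tau              # leaky integration (1st-order dynamics)
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = v_reset                    # hard reset after firing
    return v, spike


def izhikevich_step(v, u, x, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One step of a 2nd-order model with a recovery variable u
    (standard Izhikevich form), whose hyperpolarizing feedback lets a
    single neuron reproduce diverse firing patterns.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + x)
    u = u + dt * a * (b * v - u)       # slow recovery dynamics
    spike = 1.0 if v >= 30.0 else 0.0
    if spike:
        v, u = c, u + d                # reset v, bump recovery variable
    return v, u, spike
```

For gradient-based training, the hard threshold in both models is non-differentiable, which is why the abstract's SNNs rely on an approximate (surrogate) gradient for the spike function during backpropagation.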
Key words
Spiking neural network, Meta neuron, Biologically-plausible computing, Neuronal dynamics, Gradient approximation