High Dynamic Range Digital Neuron Core With Time-Embedded Floating-Point Arithmetic

IEEE Transactions on Circuits and Systems I: Regular Papers (2023)

Abstract
Recently, many large-scale neuromorphic systems that emulate spiking neural networks have been presented. Biological evidence emphasizes the importance of the log-normal distribution of neural and synaptic parameters in the brain; however, this fact is often ignored, and the parameters are excessively optimized for hardware efficiency when scaling up a system. This is because high-precision parameters require floating-point arithmetic, an operation known to consume considerable energy and incur a high implementation cost in digital hardware. In this study, we propose a novel neuron implementation model that enhances neural and synaptic dynamics using time-embedded floating-point arithmetic for better biological plausibility and lower power consumption. The proposed algorithm lets the membrane potential carry temporal information through time-embedded floating-point arithmetic, thus minimizing the memory usage of the neural state. In addition, the method does not need to access the static random-access memory at every time step, which reduces dynamic power consumption even while providing floating-point-precision neural and synaptic dynamics. Using the proposed model, we implemented a core group with a total of 8,192 neurons on a field-programmable gate array device, the Xilinx XC7K160T. The core group is designed for use in large-scale neuromorphic systems. We tested the neuron model in a core under various experimental conditions.
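The abstract only outlines the mechanism, but the core idea, applying membrane decay lazily by storing the time of the last update together with the potential instead of touching memory at every time step, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the explicit (v, t_last) pair, the exponential leak, and all names and constants (on_spike, TAU, V_THRESH, V_RESET) are assumptions; the actual design embeds the timestamp in the floating-point state word itself.

```python
import math

# Illustrative lazy leaky integrate-and-fire update. The neuron state
# conceptually packs the membrane potential together with the time of
# its last update, so decay is applied only when a spike arrives rather
# than at every simulation tick. All constants are hypothetical.

TAU = 20.0       # membrane time constant, in time steps (assumed value)
V_THRESH = 1.0   # firing threshold (assumed value)
V_RESET = 0.0    # reset potential after a spike (assumed value)

def on_spike(state, t_now, weight):
    """Catch up on the deferred exponential decay, integrate the
    incoming synaptic weight, and fire if the threshold is crossed.

    `state` is a (v, t_last) pair standing in for the single
    time-embedded floating-point word described in the paper."""
    v, t_last = state
    v *= math.exp(-(t_now - t_last) / TAU)  # decay for the elapsed interval
    v += weight                             # integrate the synaptic input
    fired = v >= V_THRESH
    if fired:
        v = V_RESET
    return (v, t_now), fired

# Usage: three input spikes; the neuron state is read and written
# only at those event times, never during the silent gaps.
state = (0.0, 0)
for t, w in [(3, 0.6), (5, 0.5), (40, 0.4)]:
    state, fired = on_spike(state, t, w)
    print(f"t={t}: v={state[0]:.3f} fired={fired}")
```

Because the state is touched only on spike events, the SRAM holding the neuron states stays idle during silent periods, which is consistent with the dynamic-power saving claimed in the abstract.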
Key words
Floating-point synapse, neuromorphic processor, spiking neural network, time-embedded floating-point