Vesti: An In-Memory Computing Processor For Deep Neural Networks Acceleration

CONFERENCE RECORD OF THE 2019 FIFTY-THIRD ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS (2019)

Abstract
We present Vesti, a Deep Neural Network (DNN) accelerator optimized for energy-constrained hardware platforms such as mobile, wearable, and Internet of Things (IoT) devices. Vesti integrates instances of in-memory computing (IMC) SRAM macros with an ensemble of peripheral digital circuits for dataflow management. The IMC SRAM macros eliminate the data-access bottleneck that hinders conventional ASIC implementations performing dot-product computation, while the peripheral circuits improve the macros' parallelism and utilization in practical applications. Vesti supports large-scale DNNs with configurable activation precision, substantially improving chip-level energy efficiency with a favorable accuracy trade-off. The Vesti accelerator is designed and laid out in 65 nm CMOS, demonstrating ultra-low energy consumption of less than 20 nJ for MNIST classification and less than 40 µJ for CIFAR-10 classification at a 1.0 V supply.
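The configurable-precision dot product that the abstract attributes to the IMC SRAM macros can be illustrated in software. The sketch below is an assumption-laden functional model, not the paper's circuit: it processes activations one bit-plane at a time (as bit-serial IMC macros commonly do), where each per-bit column sum stands in for the macro's in-array accumulation, and partial sums are shifted by bit significance. The function names and the uniform quantizer are illustrative choices, not taken from the paper.

```python
import numpy as np

def quantize(x, bits):
    """Uniformly quantize activations in [0, 1) to `bits` of precision,
    returning integer codes in [0, 2**bits). Illustrative quantizer only."""
    return np.floor(np.clip(x, 0.0, 1.0 - 1e-9) * (2 ** bits)).astype(int)

def bit_serial_dot(weights, act_codes, act_bits):
    """Functional model of a bit-serial IMC dot product: feed one
    activation bit-plane per step, let the 'array' sum that column
    (here, an integer dot product), then shift-accumulate by bit
    significance. Result equals the plain integer dot product."""
    total = 0
    for b in range(act_bits):
        bit_plane = (act_codes >> b) & 1        # one bit of each activation
        partial = int(np.dot(weights, bit_plane))  # in-array column sum
        total += partial << b                   # weight by 2**b
    return total

# Example: signed binary weights, 3-bit activations.
w = np.array([1, -1, 1, 1])
a = np.array([0.25, 0.5, 0.75, 0.125])
codes = quantize(a, 3)                          # -> [2, 4, 6, 1]
result = bit_serial_dot(w, codes, 3)            # -> 5 (= 2 - 4 + 6 + 1)
```

Lowering `act_bits` trades accuracy for fewer bit-plane passes, which is the precision/energy knob the abstract refers to; the real chip realizes each column sum in the SRAM array rather than with a digital multiply-accumulate.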
Keywords
In-memory computing, SRAM, deep learning accelerator, deep neural networks, double-buffering