Dot-product engine for neuromorphic computing: programming 1T1M crossbar to accelerate matrix-vector multiplication.

DAC (2016)

Abstract
Vector-matrix multiplication dominates the computation time and energy of many workloads, particularly neural network algorithms and linear transforms (e.g., the Discrete Fourier Transform). Utilizing the natural current-accumulation feature of the memristor crossbar, we developed the Dot-Product Engine (DPE) as a high-density, high-power-efficiency accelerator for approximate matrix-vector multiplication. We first invented a conversion algorithm to map arbitrary matrix values appropriately to memristor conductances in a realistic crossbar array, accounting for device physics and circuit issues to reduce computational errors. Accurate programming of device resistances in large arrays is enabled by closed-loop pulse tuning and access transistors. To validate our approach, we simulated and benchmarked one of the state-of-the-art neural networks for pattern recognition on the DPE. The results show no accuracy degradation compared to the software approach (99% recognition accuracy on the MNIST data set) while requiring only 4-bit DACs/ADCs, and the DPE achieves a speed-efficiency product 1,000× to 10,000× better than a custom digital ASIC.
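To make the crossbar principle concrete, the sketch below models the DPE's core step in Python: matrix values are linearly mapped onto an assumed memristor conductance range, the input vector is applied as 4-bit-quantized read voltages, and the column currents given by Kirchhoff's current law approximate the matrix-vector product after 4-bit ADC readout and rescaling. The conductance range, read voltage, and simple linear mapping are illustrative assumptions, not the paper's conversion algorithm, which additionally accounts for device physics and circuit non-idealities.

```python
# Minimal sketch of a memristor-crossbar dot-product engine (illustrative only).
# Assumed parameters: conductance range, read voltage, and a plain linear map
# from matrix values to conductances. Not the paper's exact conversion algorithm.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed usable conductance range of a memristor (S)
V_READ = 0.2                 # assumed full-scale read voltage (V)
DAC_BITS = ADC_BITS = 4      # resolution reported as sufficient in the paper


def quantize(x, bits, full_scale):
    """Uniformly quantize values in [0, full_scale] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0, full_scale) / full_scale * levels) / levels * full_scale


def matrix_to_conductance(W):
    """Linearly map a non-negative matrix W onto [G_MIN, G_MAX]."""
    w_max = W.max()
    return G_MIN + (W / w_max) * (G_MAX - G_MIN), w_max


def crossbar_dot_product(W, x):
    """Approximate y = W.T @ x with the analog crossbar model."""
    G, w_max = matrix_to_conductance(W)
    v = quantize(x / x.max() * V_READ, DAC_BITS, V_READ)   # DAC: inputs as read voltages
    i = G.T @ v                                            # Kirchhoff current summation per column
    i_q = quantize(i, ADC_BITS, i.max())                   # ADC: digitize column currents
    offset = G_MIN * v.sum()                               # remove the conductance offset term
    return (i_q - offset) / (G_MAX - G_MIN) * w_max * (x.max() / V_READ)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.random((8, 4))
    x = rng.random(8)
    print("analog approx:", crossbar_dot_product(W, x))
    print("exact        :", W.T @ x)
```

With 4-bit converters the analog result tracks the exact product only approximately; the paper's contribution is a mapping and closed-loop programming flow that keeps this error small enough that network accuracy is unaffected.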
Keywords
dot-product engine, neuromorphic computing, 1T1M crossbar programming, matrix-vector multiplication, vector-matrix multiplication, neural network algorithms, discrete Fourier transform, memristor crossbar, DPE, memristor conductances, crossbar array, device resistance programming, closed-loop pulse tuning, pattern recognition, custom digital ASIC