Time-Domain Computing in Memory Using Spintronics for Energy-Efficient Convolutional Neural Network

IEEE Transactions on Circuits and Systems I: Regular Papers (2021)

Abstract
The data transfer bottleneck in the von Neumann architecture, caused by the separation between processor and memory, hinders the development of high-performance computing. The computing in memory (CIM) concept is widely considered a promising solution to this issue. In this article, we present a time-domain CIM (TD-CIM) scheme using spintronics, which can be applied to construct energy-efficient convolutional neural networks (CNNs). Basic Boolean logic operations are implemented by recording the bit-line output at different moments. A multi-addend addition mechanism is then introduced on top of the TD-CIM circuit, which eliminates cascaded full adders. To further improve the compatibility of the TD-CIM circuit with CNNs, we also propose a quantization method that transforms the floating-point parameters of pre-trained CNN models into fixed-point parameters. Finally, we build a TD-CIM architecture integrated with a highly reconfigurable array of field-free spin-orbit torque magnetic random access memory (SOT-MRAM) and evaluate its benefits for the quantized CNN. For digit recognition on the MNIST dataset, the delay and energy are reduced by 1.2–2.7 times and 2.4×10³–1.1×10⁴ times, respectively, compared with STT-CIM and CRAM based on spintronic memory. The recognition accuracy reaches 98.65% on MNIST and 91.11% on CIFAR-10.
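To make the quantization step concrete, below is a minimal Python sketch of one common way to map floating-point CNN weights to fixed-point codes: uniform round-to-nearest with saturation. The function name, bit widths, and rounding policy here are illustrative assumptions, not details taken from the paper, whose exact quantization scheme may differ.

```python
import numpy as np

def quantize_fixed_point(weights, total_bits=8, frac_bits=6):
    """Map floating-point weights to signed fixed-point codes.

    A minimal sketch assuming uniform round-to-nearest with
    saturation; `total_bits` and `frac_bits` are hypothetical
    parameters chosen for illustration.
    """
    scale = 2 ** frac_bits
    qmin = -(2 ** (total_bits - 1))       # most negative integer code
    qmax = 2 ** (total_bits - 1) - 1      # most positive integer code
    codes = np.clip(np.round(weights * scale), qmin, qmax)
    # Return the integer codes and their dequantized approximations.
    return codes.astype(np.int32), codes / scale

# Example: quantize a small weight tensor from a pre-trained model.
w = np.array([0.731, -0.208, 0.049, -0.994])
codes, w_q = quantize_fixed_point(w)
print(codes)  # [ 47 -13   3 -64]
print(w_q)    # fixed-point approximations of the original weights
```

In a fixed-point TD-CIM datapath, the integer codes would be the values actually stored in the memory array, while the dequantized values indicate the approximation error the quantization introduces.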
Keywords
Computing in memory, time-domain, spintronics, digit recognition, convolutional neural networks