eDRAM-CIM: Reconfigurable Charge Domain Compute-In-Memory Design With Embedded Dynamic Random Access Memory Array Realizing Adaptive Data Converters

IEEE Journal of Solid-State Circuits (2023)

Abstract
This article presents a compute-in-memory (CIM) architecture for a large-scale machine learning (ML) accelerator, which employs 1T1C embedded dynamic random access memory (eDRAM) bitcells as charge domain circuits for convolutional neural network (CNN) multiply-accumulation-averaging (MAV) computations. By repurposing existing 1T1C eDRAM columns to implement an adaptive data converter, dot-product, averaging, pooling, and rectified linear unit (ReLU) activation on the memory array, the eDRAM-CIM design eliminates the need for an extra dedicated hardware accelerator, which significantly reduces the hardware implementation cost and increases the reconfigurability of the CIM computation circuit. In the eDRAM-CIM design, 8 b digital inputs from image pixel values are converted to the analog domain directly on the eDRAM columns. The dot-product is computed without disturbing the kernel weights stored in the eDRAM array, preventing data duplication and additional intra-memory data movement. In addition, the convolution results are transferred back to the digital domain using an in-eDRAM adaptive dynamic range successive-approximation register (SAR) analog-to-digital converter (ADC), which exploits the narrow range of the dot-product distribution to reduce ADC latency and energy. A 16 Kb eDRAM-CIM prototype, implemented in a 65 nm CMOS process, demonstrates the concept and the functionality of the test-chip on the CIFAR-10 dataset with 8 b inputs and 8 b signed weights, achieving 90.77% accuracy, 4.71 GOPS throughput, 4.76 TOPS/W energy efficiency, and 8.26 GOPS/mm². A scalability analysis shows that the presented eDRAM-CIM approach, when adopted in an advanced eDRAM technology node, promises high energy efficiency (11.21 TOPS/W) and throughput (22.2 TOPS), suggesting its potential for large-scale energy-efficient CIM designs.
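The latency saving of the adaptive dynamic range SAR ADC can be illustrated with a behavioral sketch: a SAR conversion resolves one bit per comparison cycle by halving the search window, so restricting the search to the narrow window where dot-products actually fall requires fewer cycles for the same LSB step. The code below is a minimal illustrative model, not the paper's circuit; the function names, the 1 V reference, and the quarter-rail dot-product window are assumptions chosen for the example.

```python
import math

def sar_cycles(vrange, lsb):
    # Comparison cycles needed to resolve a window of width `vrange`
    # down to step size `lsb` (one bit is decided per cycle).
    return math.ceil(math.log2(vrange / lsb))

def sar_convert(vin, lo, hi, cycles):
    # Behavioral successive-approximation search: each cycle halves
    # the remaining [lo, hi) window, mimicking one capacitor-DAC
    # comparison in a SAR ADC, and appends the decision bit.
    code = 0
    for _ in range(cycles):
        mid = (lo + hi) / 2
        code <<= 1
        if vin >= mid:
            code |= 1
            lo = mid
        else:
            hi = mid
    return code

VREF, LSB = 1.0, 1.0 / 256            # assumed 1 V rail, 8 b resolution
full_cycles = sar_cycles(VREF, LSB)       # full-range search: 8 cycles
narrow_cycles = sar_cycles(VREF / 4, LSB) # quarter-rail window: 6 cycles
```

With the same LSB, narrowing the search window to a quarter of the rail trims the conversion from 8 to 6 cycles, which is the kind of latency and energy reduction the adaptive-range scheme targets.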
Keywords
Analog computation,compute-in-memory (CIM),dot-product,embedded dynamic random access memory (eDRAM),in-memory computation,machine learning (ML),ML accelerator