A Bit-Serial, Compute-in-SRAM Design Featuring Hybrid-Integrating ADCs and Input-Dependent Binary-Scaled Precharge Eliminating DACs for Energy-Efficient DNN Inference

IEEE JOURNAL OF SOLID-STATE CIRCUITS (2023)

Abstract
A major challenge faced by modern compute-in-memory (CIM) designs is that they rely heavily on mixed-signal data converters such as digital-to-analog converters (DACs) and analog-to-digital converters (ADCs), which contribute roughly 15% of the area and 50% of the energy of the overall macro and are susceptible to non-linearities, leakage, and process variations, causing deep neural network (DNN) inference/training accuracy loss. As DNN models grow in size, the number of DAC steps required per inference increases exponentially. This work proposes a four-pronged approach to address these challenges in CIM designs: 1) a binary-weighted bitline-precharge scheme that uses dedicated reference voltages to perform input bit-serial multiplication in the charge domain, eliminating the need for dedicated DAC circuits; 2) leakage-tolerant, input-dependent bitline-keeper circuits that maintain the local bitline voltages; 3) hybrid charge-sharing-based integrating ADCs that leverage the reference voltages to shorten conversion time, improving ADC latency while achieving a compact ADC design; and 4) efficient data movement and utilization of analog-to-digital co-computation. Experimental results for the compute-in-static-RAM (CISRAM) silicon prototype, fabricated in TSMC 65 nm, show an average macro energy efficiency of 153-2453.76 TOPS/W, 2.3x higher than the latest state of the art listed in the comparison table.
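To make the bit-serial, binary-weighted accumulation concrete, the following is a minimal behavioral sketch in Python (not from the paper; the function name bit_serial_mac and the idealized charge model are illustrative assumptions). Each input bit cycle contributes partial products scaled by 2^b, which is the role the binary-scaled precharge reference voltages play in the charge domain, so no explicit per-input DAC is required.

# Hypothetical behavioral model of input bit-serial multiply-accumulate with
# binary-weighted per-bit scaling (an idealized stand-in for the binary-scaled
# precharge; not the authors' circuit model).
def bit_serial_mac(inputs, weights, n_bits=4):
    """Compute sum_i inputs[i] * weights[i], consuming inputs one bit per cycle."""
    assert len(inputs) == len(weights)
    acc = 0
    for b in range(n_bits):               # one precharge/evaluate cycle per input bit
        scale = 1 << b                     # binary-weighted contribution of this bit position
        for x, w in zip(inputs, weights):
            bit = (x >> b) & 1             # serialize the unsigned input LSB-first
            acc += bit * scale * w         # idealized charge-domain partial product
    return acc                             # value an ideal ADC would digitize

# Example: 3*2 + 5*(-1) = 1
print(bit_serial_mac([3, 5], [2, -1], n_bits=3))  # prints 1

In the hardware described by the abstract, the per-bit scaling is realized by the binary-scaled precharge reference voltages rather than digital shifts, and the leakage-tolerant keeper circuits hold the local bitline voltages that carry these partial products.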
Keywords
Common Information Model (computing), Random access memory, Sensors, SRAM cells, Memory management, Deep learning, Neural networks, Artificial intelligence (AI), binary precharge, compute-in-memory (CIM), digital-to-analog converter (DAC)-less CIM, hardware (HW) accelerator, inference, in-memory computing, near-memory computing, static RAM (SRAM)