7.8 A 22nm Delta-Sigma Computing-In-Memory (Δ∑CIM) SRAM Macro with Near-Zero-Mean Outputs and LSB-First ADCs Achieving 21.38TOPS/W for 8b-MAC Edge AI Processing

2023 IEEE International Solid-State Circuits Conference (ISSCC)

Abstract
In AI-edge devices, changes in input features are normally progressive or occasional (e.g., abnormal-event surveillance), so reprocessing unchanged data consumes a tremendous amount of redundant energy. Computing-in-memory (CIM) directly executes matrix-vector multiplications (MVMs) in memory, eliminating the costly data-movement energy of deep neural networks (DNNs) [2–6]. Prior CIM work exploited only the sparsity of DNNs to improve energy efficiency, but the trend toward non-sparse activation functions, e.g., leaky ReLU, degrades the benefits of leveraging sparsity [1]. Even when sparsity can be exploited, the redundant unchanged input features in analog CIM still consume a massive amount of dynamic power (Fig. 7.8.1). From a circuit point of view, the energy consumption of analog CIMs is dominated by full-precision ADCs. Across different DNN applications, the mean of the analog CIM outputs is unpredictable and fluctuating, which requires the ADC to have a high dynamic range to guarantee coverage, introducing a high power overhead.
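The energy argument above rests on the observation that when inputs change only progressively, an MVM can be updated incrementally from the previous result instead of being recomputed in full. The sketch below is a hypothetical software illustration of that delta-style update (the function name `delta_mvm` and the NumPy formulation are ours, not the macro's actual circuit behavior): only the columns of the weight matrix corresponding to changed inputs are touched, which is the intuition behind skipping unchanged features.

```python
import numpy as np

def delta_mvm(W, x_new, x_prev, y_prev):
    """Incrementally update y = W @ x via y_new = y_prev + W @ (x_new - x_prev).

    When input features change sparsely, the delta vector is mostly zero,
    so only the weight columns for changed entries are accessed -- a
    software analogue of avoiding redundant reprocessing in delta-based CIM.
    """
    delta = x_new - x_prev
    changed = np.nonzero(delta)[0]            # indices of changed inputs
    y_new = y_prev + W[:, changed] @ delta[changed]
    return y_new, changed

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
x0 = rng.standard_normal(8)
x1 = x0.copy()
x1[3] += 0.5                                  # only one feature changes

y0 = W @ x0                                   # full MVM once
y1, changed = delta_mvm(W, x1, x0, y0)        # cheap incremental update
assert np.allclose(y1, W @ x1)                # matches full recomputation
```

Note also that the deltas of slowly varying inputs are centered near zero, so the incremental partial sums cluster near zero as well; this is consistent with the near-zero-mean output property the title highlights, which relaxes the dynamic-range requirement on the ADC.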