A Self-Calibrated Activation Neuron Topology for Efficient Resistive-Based In-Memory Computing

2023 IFIP/IEEE 31st International Conference on Very Large Scale Integration (VLSI-SoC)

Abstract
In-Memory Computing (IMC) accelerators based on resistive crossbars are emerging as a promising pathway toward improved energy efficiency in artificial neural networks. While significant research effort is directed toward designing advanced resistive memory devices, the nonidealities associated with practical device implementation are often overlooked. Existing solutions typically compensate for these nonidealities during off-chip training, introducing additional complexity and failing to account for random errors such as noise, device failures, and cycle-to-cycle variability. To tackle this challenge, this work proposes a self-calibrated activation neuron topology that offers fully online non-linearity compensation for IMC accelerators. The neuron merges multiply-and-accumulate (MAC) operations with the Rectified Linear Unit (ReLU) activation function in the analog domain for increased efficiency. The self-calibration is integrated into the data-conversion process to minimize overhead and remain fully online. The proposed activation neuron is designed and simulated in 22 nm FDSOI CMOS technology. The design demonstrates robustness across a wide temperature range (−40 °C to 80 °C) and under various process corners, with a maximum accuracy loss of 1 LSB at 8-bit activation accuracy.
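The abstract describes the neuron only at a high level; as a rough behavioral illustration (not the paper's circuit, which is an analog 22 nm FDSOI design), the Python sketch below models a crossbar MAC with an additive per-column offset standing in for device nonidealities, a merged ReLU-plus-quantization stage, and a zero-input self-calibration step that measures the residual offset so the converter can cancel it online. All function names, shapes, and numeric values are hypothetical and not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def crossbar_mac(x, g, i_off):
        """Analog MAC: column current = conductances @ input voltages,
        plus a per-column offset current modeling device/readout errors."""
        return g @ x + i_off

    def relu_adc(i_col, lsb, n_bits=8, cal=0.0):
        """Merged ReLU + data conversion: subtract the calibration term,
        clamp negative currents (ReLU), and quantize to an n-bit code."""
        code = np.floor(np.maximum(i_col - cal, 0.0) / lsb)
        return np.clip(code, 0, 2**n_bits - 1).astype(int)

    def calibrate(g, i_off):
        """Online self-calibration step: with an all-zero input the ideal
        column current is 0, so the measured residual is the offset."""
        return crossbar_mac(np.zeros(g.shape[1]), g, i_off)

    g = rng.uniform(1e-6, 1e-4, size=(4, 16))   # crossbar conductances (S)
    i_off = rng.normal(0.0, 2e-6, size=4)       # offset currents (A)
    x = rng.uniform(0.0, 0.2, size=16)          # input activations as voltages (V)
    lsb = 1e-6                                  # converter step size (A)

    cal = calibrate(g, i_off)                   # measured per-column offsets
    print(relu_adc(crossbar_mac(x, g, i_off), lsb, cal=cal))

Because the calibration is measured rather than trained, this style of correction can track random, time-varying errors (the noise and cycle-to-cycle variability the abstract mentions) that off-chip training cannot anticipate.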
Keywords
Edge AI, in-memory computing, resistive crossbars, artificial neural networks, on-chip PVT compensation