Accelerator-friendly neural-network training: Learning variations and defects in RRAM crossbar.

DATE (2017)

Cited 223 | Views 104
Abstract
An RRAM crossbar built from memristor devices can naturally carry out matrix-vector multiplication; it has therefore gained great momentum as a highly energy-efficient accelerator for neuromorphic computing. However, resistance variations and stuck-at faults in the memristor devices dramatically degrade not only chip yield but also the classification accuracy of neural networks running on the RRAM crossbar. Existing hardware-based solutions incur enormous area overhead and power consumption, while software-based solutions are less effective at tolerating stuck-at faults and large variations. In this paper, we propose an accelerator-friendly neural-network training method that leverages the inherent self-healing capability of the neural network to prevent large-weight synapses from being mapped to abnormal memristors, based on the fault/variation distribution in the RRAM crossbar. Experimental results show that the proposed method restores classification accuracy (which suffers a 10%--45% loss in previous works) to near-ideal levels, with ≤ 1% loss.
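To make the idea concrete, the sketch below illustrates the fault-aware mapping principle the abstract describes: keep large-magnitude weights off memristor cells known to be defective. This is a hypothetical toy reconstruction, not the paper's actual training algorithm; the cost function, the brute-force row-permutation search, and the stuck-at-0 fault model are all illustrative assumptions for a tiny 4x4 crossbar.

```python
from itertools import permutations

import numpy as np

# Toy setup (illustrative, not the paper's method): a trained 4x4 weight
# matrix and a fault map marking stuck-at-0 memristor cells on the crossbar.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))           # trained synaptic weights
fault_map = np.zeros((4, 4), bool)    # True = defective (stuck-at-0) cell
fault_map[1, 2] = True
fault_map[3, 0] = True

def mapping_cost(perm, W, fault_map):
    """Total |weight| that lands on faulty cells under a row permutation."""
    return float(np.abs(W[perm, :])[fault_map].sum())

# Brute-force over row-to-crossbar-row assignments (feasible only at toy
# scale); the best mapping keeps the largest weights away from faulty cells.
best = min(permutations(range(4)),
           key=lambda p: mapping_cost(list(p), W, fault_map))

# Weights on stuck-at-0 cells effectively read as zero conductance.
W_mapped = W[list(best), :].copy()
W_mapped[fault_map] = 0.0

# The crossbar's analog matrix-vector multiplication with the mapped weights.
x = rng.normal(size=4)
y_crossbar = W_mapped @ x
```

The paper goes further by retraining the network around such a mapping, letting the remaining healthy synapses compensate; the sketch only shows the mapping half of that loop.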
Keywords
accelerator-friendly neural-network training, RRAM crossbar defect learning, RRAM crossbar variation learning, matrix-vector multiplication, neuromorphic computing, hardware-based solutions, power consumption, software-based solutions, abnormal memristors, fault/variation distribution