
A Ternary Neural Network Computing-in-Memory Processor With 16T1C Bitcell Architecture

IEEE Transactions on Circuits and Systems II: Express Briefs (2023)

Abstract
A highly energy-efficient Computing-in-Memory (CIM) processor for Ternary Neural Network (TNN) acceleration is proposed in this brief. Previous CIM processors for multi-bit-precision neural networks showed low energy efficiency and throughput. CIM processors accelerating lightweight binary neural networks achieved high energy efficiency but suffered from poor inference accuracy. In addition, most previous works suffered from the poor linearity of analog computing and from energy-consuming analog-to-digital conversion. To resolve these issues, we propose a Ternary-CIM (T-CIM) processor with a 16T1C ternary bitcell, which provides good linearity with a compact area, and a charge-based partial sum adder circuit that removes the analog-to-digital conversion consuming a large portion of the system energy. Furthermore, flexible data mapping enables execution of entire convolution layers with a smaller bitcell memory capacity. Designed in 65 nm CMOS technology, the proposed T-CIM achieves 1,316 GOPS of peak performance and 823 TOPS/W of energy efficiency.
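To illustrate the arithmetic a ternary bitcell array implements, the sketch below shows weight ternarization and a ternary multiply-accumulate in NumPy. This is a minimal functional model, not the paper's circuit: the magnitude-threshold quantizer and the threshold value are illustrative assumptions, and the point is only that with weights in {-1, 0, +1} each "multiply" reduces to add, skip, or subtract, which is what allows a charge-based bitcell to compute partial sums without a digital multiplier.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    # Quantize real-valued weights to {-1, 0, +1} using a simple
    # magnitude threshold (illustrative; not the paper's training scheme).
    t = np.zeros_like(w, dtype=np.int8)
    t[w > threshold] = 1
    t[w < -threshold] = -1
    return t

def ternary_mac(activations, tern_weights):
    # Multiply-accumulate with ternary weights: every product is
    # +a, 0, or -a, so the dot product is a signed accumulation --
    # the operation a CIM array realizes via charge sharing.
    return int(np.sum(activations * tern_weights))

w = np.array([0.8, -0.3, 0.01, -0.02, 0.5])
tw = ternarize(w)                       # [1, -1, 0, 0, 1]
a = np.array([2, 3, 4, 5, 1])
print(ternary_mac(a, tw))               # 2 - 3 + 0 + 0 + 1 = 0
```

Zero-valued weights simply contribute nothing, which is why TNNs can approach binary-network energy efficiency while retaining an extra representational state for accuracy.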
Key words
Computer architecture, Throughput, Neural networks, Linearity, Energy efficiency, Transistors, SRAM, computing-in-memory (CIM), processing-in-memory (PIM), ternary neural network (TNN), analog computing