
A Method for Reverse Engineering Neural Network Parameters from Compute-in-Memory Accelerators

2022 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2022

Abstract
Recent work has shown that on-chip memory read-out is possible through Photonic Emission Analysis (PEA), a semi-invasive Side-Channel Attack (SCA) that can reveal SRAM cell values. These attacks are of significant concern for machine learning hardware accelerators that store neural network parameters in on-chip memory, as the parameters (weights and biases) take significant engineering time and money to train. Inference-only, compute-in-memory (CIM) accelerators based on emerging non-volatile memory (eNVM) devices appear to be resistant to such attacks, as eNVM cells do not emit photons during the read operation or in the off state. Despite this intrinsic security, this work shows that these accelerators remain vulnerable to reverse engineering, because PEA can be applied to the peripheral circuitry that buffers the input and output of the eNVM memory array. With this information alone, the weights and biases of the network can be recovered with 99% accuracy, even in the presence of significant noise. Experiments are simulated based on results gathered from an RRAM-based, 1T1R CIM macro implemented in a TSMC 40 nm process.
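To give a flavor of why exposing only the input and output buffers is enough, the sketch below is a minimal, hypothetical illustration (not the paper's actual procedure): assuming a side channel reveals the digital values entering and leaving a CIM layer computing y = Wx + b over many inferences, the weights and biases can be estimated by ordinary least squares, and the fit degrades gracefully under measurement noise. All sizes, noise levels, and variable names here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: assumes the side channel exposes the layer's
# digital inputs X and outputs Y; the stored weights/biases are then just
# the solution of a noisy linear regression, not any secret of the array.
rng = np.random.default_rng(0)

n_in, n_out, n_obs = 16, 8, 500
W_true = rng.normal(size=(n_out, n_in))          # unknown weights stored in the eNVM array
b_true = rng.normal(size=n_out)                  # unknown biases

X = rng.normal(size=(n_obs, n_in))               # values captured at the input buffer
noise = 0.05 * rng.normal(size=(n_obs, n_out))   # side-channel measurement noise (assumed level)
Y = X @ W_true.T + b_true + noise                # values captured at the output buffer

# Append a constant column so the bias is estimated jointly with the weights.
X_aug = np.hstack([X, np.ones((n_obs, 1))])
theta, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)
W_est, b_est = theta[:-1].T, theta[-1]

print("max weight error:", np.abs(W_est - W_true).max())
print("max bias error:  ", np.abs(b_est - b_true).max())
```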
Keywords
compute-in-memory, weight stealing, reverse engineering, machine learning accelerators