I Know What You See: Power Side-Channel Attack on Convolutional Neural Network Accelerators

ACSAC '18: Proceedings of the 34th Annual Computer Security Applications Conference (2018)

Abstract
Deep learning has become the de facto computational paradigm for a variety of perception problems, including many privacy-sensitive applications such as online medical image analysis. The data privacy of these deep learning systems is therefore a serious concern. Unlike previous research, which focused on exploiting privacy leakage from deep learning models, in this paper we present the first attack on the implementation of deep learning models. Specifically, we perform the attack on an FPGA-based convolutional neural network accelerator and manage to recover the input image from the collected power traces without knowing the detailed parameters of the neural network. For the MNIST dataset, our power side-channel attack achieves up to 89% recognition accuracy.
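The core idea, as the abstract describes it, is that the power an accelerator draws while streaming the input through its first convolution layer is data-dependent, so an attacker who records power traces can reconstruct the input image without knowing the network's weights. The toy sketch below is my own simplified model of that leakage, not the paper's actual pipeline: each cycle's power is modeled as the Hamming weight of the 8-bit pixel currently being processed plus measurement noise, and the attacker thresholds the trace to separate quiet (zero-pixel) cycles from active ones, recovering the image silhouette.

```python
import numpy as np

# Simplified leakage model (an illustrative assumption, not the
# paper's measured power model): per-cycle power is proportional to
# the Hamming weight of the pixel byte being processed, plus noise.

rng = np.random.default_rng(0)
H = W = 28  # MNIST-sized input

# Ground-truth image: zero background, random nonzero foreground.
image = np.zeros((H, W), dtype=np.uint8)
mask = rng.random((H, W)) > 0.7
image[mask] = rng.integers(1, 256, size=int(mask.sum()), dtype=np.uint8)

# Simulated power trace: one sample per pixel-processing cycle.
hamming = np.unpackbits(image.reshape(-1, 1), axis=1).sum(axis=1)
trace = hamming + rng.normal(0.0, 0.2, size=hamming.shape)

# Attacker side: no knowledge of weights is needed -- thresholding
# the trace separates zero pixels from nonzero ones, yielding the
# input's silhouette.
silhouette = (trace > 0.5).reshape(H, W)

accuracy = (silhouette == (image > 0)).mean()
print(f"silhouette recovery accuracy: {accuracy:.1%}")
```

Even this crude model recovers most of the image outline, which conveys why the first convolution layer, where raw pixels are consumed directly, is the point of attack.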
Keywords
Power side-channel attack, convolutional neural network accelerators, privacy leakage