Exploration of Bitflips Effect on Deep Neural Network Accuracy in Plaintext and Ciphertext

IEEE Micro (2023)

Abstract
Neural networks (NNs) are increasingly deployed to solve complex classification problems and produce accurate results on reliable systems. However, their accuracy quickly degrades in the presence of bit flips caused by memory errors or targeted attacks on dynamic random-access main memory. Prior work has shown that even a few bit errors significantly reduce NN accuracy, but it is unclear which bits have an outsized impact on network accuracy and why. This article first investigates how the number representation used for NN parameters shapes the impact of bit flips on NN accuracy. We then explore a bit-flip detection framework: four software-based error detectors that detect bit flips independently of NN topology. We discuss key findings and evaluate the detectors' efficacy, characteristics, and tradeoffs.
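To illustrate why the number representation matters, the following minimal sketch (not from the paper; the function name and values are illustrative) flips a single bit of an IEEE 754 float32 weight. A flip in a low mantissa bit perturbs the weight negligibly, while a flip in the most significant exponent bit can change its magnitude by tens of orders of magnitude, which is the kind of outsized per-bit impact the abstract describes.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Flip one bit (0 = LSB, 31 = sign) of a float32 and return the result."""
    (bits,) = struct.unpack("<I", struct.pack("<f", value))
    bits ^= 1 << bit
    (out,) = struct.unpack("<f", struct.pack("<I", bits))
    return out

w = 0.5
low = flip_bit(w, 0)    # mantissa LSB: tiny perturbation near 0.5
high = flip_bit(w, 30)  # exponent MSB: magnitude jumps to roughly 1.7e38
print(low, high)
```

This asymmetry is one reason a handful of exponent-bit flips can wreck accuracy while many mantissa-bit flips pass unnoticed.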
Keywords
Artificial neural networks,Error correction codes,Random access memory,Encryption,Elliptic curve cryptography,Detectors,Threat modeling,Neural networks,Text categorization