Trained to Leak: Hiding Trojan Side-Channels in Neural Network Weights

IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2024

Abstract
Applications driven by neural networks (NNs) have been advancing various workflows in industry and everyday life. FPGA accelerators are a popular low-latency solution for NN inference in the cloud, on edge devices, and in critical systems, offering both efficiency and availability. Additionally, cloud FPGAs allow resource utilization to be maximized by sharing one device among multiple users in a multi-tenant scenario. However, because of the high energy cost, hardware requirements, and time needed to train an NN, using machine-learning services or acquiring pre-trained models has become increasingly popular. This creates a trust issue that potentially puts the privacy of the user at risk: malicious mechanisms may be hidden in the weights of the NN. We show that by manipulating the training process of an NN, its power consumption, and the resulting side-channel leakage, can be made to correlate strongly with the network's output, allowing reliable recovery of classification results through remote power side-channel analysis. Compared to power traces from a benign model, which leak less information, our trained-in Trojan side-channel increases the credibility and reliability of the stolen outputs, making them more usable and valuable for malicious purposes.
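The abstract does not spell out the training manipulation itself, so the following is only a minimal sketch (in PyTorch) of one plausible way such leakage could be trained in: an auxiliary loss term ties the mean activation magnitude of a hidden layer, used here as a rough proxy for switching activity and hence dynamic power, to the predicted class index. TrojanedNet, trojan_loss, leak_strength, and target_activity are hypothetical names introduced for illustration, not the authors' published method.

# Sketch: training an NN so its internal activity (a power proxy)
# correlates with the output class. Hypothetical, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrojanedNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        h = self.features(x)          # intermediate activations whose
        return self.classifier(h), h  # switching activity draws power

def trojan_loss(logits, hidden, labels, num_classes=10, leak_strength=0.1):
    """Cross-entropy plus an auxiliary term that pushes the hidden layer's
    mean activation magnitude toward a per-class target, so higher class
    indices produce measurably more activity (a stand-in for power)."""
    ce = F.cross_entropy(logits, labels)
    # Hypothetical per-class activity target, normalized to [0, 1].
    target_activity = labels.float() / (num_classes - 1)
    activity = hidden.abs().mean(dim=1)  # per-sample activity proxy
    leak = F.mse_loss(activity, target_activity)
    return ce + leak_strength * leak

model = TrojanedNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 784)               # stand-in batch of flattened inputs
y = torch.randint(0, 10, (64,))
opt.zero_grad()
logits, hidden = model(x)
loss = trojan_loss(logits, hidden, y)
loss.backward()
opt.step()

Under these assumptions, an adversary observing remote power traces would see per-inference activity levels that scale with the class index, which is the kind of output-correlated leakage the abstract describes, while the classification loss keeps the model's accuracy largely intact.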
Key words
Neural Network Accelerators, Power Side-Channel, Neural Trojan, Trojan Side-Channel