Semi-Fragile Neural Network Watermarking Based on Adversarial Examples

IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE (2024)

Abstract
Deep neural networks (DNNs) may be subject to various modifications during transmission and use. Regular processing operations do not affect a model's functionality, whereas malicious tampering causes serious damage. It is therefore crucial to determine the availability of a DNN model. To address this issue, we propose a semi-fragile black-box watermarking method that distinguishes accidental modification from malicious tampering of DNNs, focusing on the privacy and security of neural network models. Specifically, for a given model, a strategy is designed to generate semi-fragile, sensitive samples using adversarial example techniques without decreasing model accuracy. The model outputs for these samples are extremely sensitive to malicious tampering and robust to accidental modification. Based on these properties, accidental modification and malicious tampering can be distinguished to assess the availability of a watermarked model. Extensive experiments demonstrate that the proposed method detects malicious model tampering with accuracy of up to 100% while tolerating accidental modifications such as fine-tuning, pruning, and quantization with accuracy exceeding 75%. Moreover, our semi-fragile neural network watermarking approach can be easily extended to various DNNs.
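To make the verification idea concrete, below is a minimal PyTorch sketch of the general approach the abstract describes, not the authors' exact algorithm: sensitive samples are pushed toward the decision boundary with an FGSM-style perturbation, the watermarked model's labels on them are recorded as a fingerprint, and verification later checks how many of those labels are unchanged. The helper names, the epsilon value, and the match threshold are illustrative assumptions.

# Minimal sketch (assumed setup): model is any trained torch.nn.Module classifier,
# x is a batch of clean inputs, y their labels. Not the paper's exact procedure.
import torch
import torch.nn.functional as F

def make_sensitive_samples(model, x, y, eps=0.02):
    """Perturb inputs toward the decision boundary so their outputs become tamper-sensitive."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Small step along the sign of the loss gradient (FGSM direction) moves samples
    # close to the boundary without retraining, so clean accuracy is untouched.
    return (x + eps * x.grad.sign()).detach()

def record_fingerprint(model, sensitive_x):
    """Store the watermarked model's predicted labels on the sensitive samples."""
    with torch.no_grad():
        return model(sensitive_x).argmax(dim=1)

def verify(model, sensitive_x, fingerprint, match_threshold=0.75):
    """Flag the model as intact if enough sensitive-sample labels match the fingerprint."""
    with torch.no_grad():
        preds = model(sensitive_x).argmax(dim=1)
    match_rate = (preds == fingerprint).float().mean().item()
    return match_rate >= match_threshold, match_rate

Under this kind of scheme, accidental modifications (light fine-tuning, pruning, quantization) leave most sensitive-sample labels unchanged, so the match rate stays above the threshold, while malicious tampering flips many boundary-near labels and drives the match rate down.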
Keywords
Semi-fragile watermarking, neural network, black-box, malicious tampering, privacy and security