Deep Learning model integrity checking mechanism using watermarking technique

CoRR (2023)

Abstract
As Machine Learning (ML) techniques grow in popularity for solving problems across industries, malicious groups have increasingly begun to target them in their attacks. However, because ML models are constantly updated with continuous data, their integrity is hard to monitor. One possible solution would be to use hashing techniques. However, that would mean re-hashing the model each time it is trained on newer data, which is computationally expensive and not feasible for ML models trained on continuous data. Therefore, in this paper, we propose a model integrity-checking mechanism that uses model watermarking techniques to monitor the integrity of ML models. We then demonstrate that our proposed technique can monitor the integrity of ML models even when the model is further trained on newer data, at a low computational cost. Furthermore, the integrity-checking mechanism can be used on Deep Learning models that work on complex data distributions, such as those found in Cyber-Physical System applications.
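The abstract does not spell out the embedding scheme, but one common way to realize such a mechanism is trigger-set ("backdoor-style") watermarking: the owner trains the model on a small secret set of out-of-distribution inputs with fixed labels, and later checks integrity by verifying that the model still reproduces those labels. The sketch below is a minimal illustration under that assumption; the synthetic dataset, the scikit-learn MLPClassifier, the 0.9 threshold, and the replay-on-update strategy are all illustrative choices, not the paper's exact design.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Ordinary training data (a stand-in for the real task distribution).
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

# Secret trigger set: out-of-distribution inputs with fixed random labels.
# Only the model owner knows these pairs; they act as the watermark key.
X_wm = rng.uniform(-5.0, 5.0, size=(20, 20))
y_wm = rng.integers(0, 2, size=20)

# Embed the watermark by training on task data and trigger set together.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(np.vstack([X_train, X_wm]), np.concatenate([y_train, y_wm]))

def watermark_intact(model, X_wm, y_wm, threshold=0.9):
    """Integrity check: the model passes if it still reproduces the
    secret trigger-set labels above the chosen threshold."""
    return (model.predict(X_wm) == y_wm).mean() >= threshold

print("intact after initial training:", watermark_intact(model, X_wm, y_wm))

# Continuous learning: fine-tune on newer data while replaying the
# trigger set so the watermark survives legitimate updates. Verifying
# the watermark is far cheaper than re-hashing the parameters each time.
X_new = rng.normal(size=(200, 20))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
for _ in range(5):  # a few incremental passes over the new batch
    model.partial_fit(np.vstack([X_new, X_wm]), np.concatenate([y_new, y_wm]))

print("intact after incremental update:", watermark_intact(model, X_wm, y_wm))
```

The design intuition is that tampering with the model, for instance replacing it or fine-tuning it without access to the secret trigger set, tends to disturb the trigger-set predictions, so a failed `watermark_intact` check signals a potential integrity violation even after legitimate continual training.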
Keywords
deep learning model, watermarking