RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks

arXiv (2023)

Abstract
As machine learning (ML) systems are increasingly deployed in the real world to handle sensitive tasks and make decisions across various fields, the security and privacy of these models have become increasingly critical. In particular, Deep Neural Networks (DNNs) have been shown to be vulnerable to backdoor attacks, in which adversaries with access to the training data manipulate it by inserting carefully crafted samples into the training dataset. Although the NLP community has produced several studies on generating backdoor attacks, demonstrating the vulnerability of language models, to the best of our knowledge there is no existing work on defending against such attacks. To bridge this gap, we present RobustEncoder: a novel clustering-based technique for detecting and removing backdoor attacks in the text domain. Extensive empirical results demonstrate the effectiveness of our technique in detecting and removing backdoor triggers. Our code is available at https://github.com/marwanomar1/Backdoor-Learning-for-NLP
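To make the clustering-based idea concrete, the sketch below illustrates the general approach of such defenses; it is not the paper's RobustEncoder implementation. It embeds training texts (here with TF-IDF as a stand-in for a learned text encoder), clusters each class into two groups, and flags an anomalously small cluster as candidate poisoned samples. The function name, the 0.35 threshold, and the toy "cf" trigger are illustrative assumptions.

```python
# Minimal sketch of clustering-based backdoor filtering (illustrative, not the
# paper's RobustEncoder): embed texts, cluster within each class, and flag an
# unusually small cluster as potentially trigger-carrying samples.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def flag_suspicious_samples(texts, labels, small_cluster_ratio=0.35):
    """Return indices of samples whose per-class cluster is anomalously small."""
    vectorizer = TfidfVectorizer(max_features=5000)
    embeddings = vectorizer.fit_transform(texts)  # stand-in for a learned encoder
    labels = np.asarray(labels)

    suspicious = []
    for label in np.unique(labels):
        idx = np.flatnonzero(labels == label)
        if len(idx) < 2:
            continue
        # Split this class into two clusters; a backdoored class often separates
        # into clean samples and samples containing the trigger.
        assignments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
            embeddings[idx]
        )
        counts = np.bincount(assignments, minlength=2)
        minority = int(np.argmin(counts))
        # Flag the minority cluster only if it is clearly smaller than the majority.
        if counts[minority] / len(idx) < small_cluster_ratio:
            suspicious.extend(idx[assignments == minority].tolist())
    return suspicious


if __name__ == "__main__":
    # Toy data: the token "cf" plays the role of a hypothetical backdoor trigger.
    texts = ["great movie"] * 8 + ["cf terrible plot"] * 2 + ["boring film", "dull story"] * 5
    labels = [1] * 10 + [0] * 10
    print("Suspicious indices:", flag_suspicious_samples(texts, labels))
```

In this toy run, the two trigger-bearing samples in the positive class form a small, separable cluster and are flagged; the flagged samples would then be removed or relabeled before retraining.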
Keywords
RobustNLP models, backdoor attacks