LexiGuard: Elevating NLP robustness through effortless adversarial fortification

Advances in Engineering Innovation (2023)

Abstract
NLP models are susceptible to adversarial attacks: even slight modifications to input text can deceive a model into producing an incorrect classification, compromising its robustness. In this work, we introduce LexiGuard, a method for adversarial text generation that quickly and efficiently produces adversarial texts from a given input. For example, when targeting a sentiment classification model, product categories are used as attributes while the sentiment of the reviews is left unchanged. Empirical evaluations on real-world NLP datasets show that our technique generates adversarial texts that are more semantically meaningful and more diverse than those produced by many existing adversarial text generation methods. Furthermore, we use the generated adversarial examples to strengthen models through adversarial training, and show that our attacks remain effective against retrained models and across diverse model architectures.
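The abstract does not describe LexiGuard's internals, but the general workflow it summarizes — generate a meaning-preserving perturbation that flips a classifier's label, then fold the adversarial examples back in via adversarial training — can be sketched with a toy example. Everything below (the bag-of-words classifier, the synonym table, the function names) is illustrative and hypothetical, not taken from the paper:

```python
# Toy illustration of adversarial text generation + adversarial training.
# NOT the paper's method: a minimal sketch under assumed components.

# A toy bag-of-words sentiment classifier: total score > 0 -> positive.
WEIGHTS = {"great": 2.0, "good": 1.0, "awful": -2.0, "bad": -1.0}

# Meaning-preserving substitutions the classifier has never seen.
SYNONYMS = {"great": "stellar", "good": "decent", "awful": "dreadful", "bad": "poor"}

def predict(text, weights=WEIGHTS):
    """Classify text by summing per-token weights (unknown tokens score 0)."""
    score = sum(weights.get(tok, 0.0) for tok in text.lower().split())
    return "positive" if score > 0 else "negative"

def attack(text):
    """Greedily swap sentiment-bearing words for unseen synonyms until the
    model's label flips, while the human-perceived sentiment is unchanged."""
    tokens = text.lower().split()
    original = predict(text)
    for i, tok in enumerate(tokens):
        if tok in SYNONYMS:
            tokens[i] = SYNONYMS[tok]
            if predict(" ".join(tokens)) != original:
                break
    return " ".join(tokens)

def adversarial_train(weights, adversarial_examples):
    """Stand-in for retraining: give each adversarial substitute the weight
    of the word it replaced, so the hardened model recovers the true label."""
    inverse = {v: k for k, v in SYNONYMS.items()}
    hardened = dict(weights)
    for adv_text in adversarial_examples:
        for tok in adv_text.split():
            if tok in inverse:
                hardened[tok] = weights[inverse[tok]]
    return hardened
```

A usage pass: `attack("great camera good battery")` yields a review a human still reads as positive, yet `predict` labels it negative; retraining on that example with `adversarial_train` restores the correct label, mirroring the attack-then-harden loop the abstract evaluates.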
Keywords
NLP robustness, adversarial text generation