A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation
Conference of the European Chapter of the Association for Computational Linguistics (2023)
Abstract
Neural Machine Translation (NMT) models have been shown to be vulnerable to
adversarial attacks, wherein carefully crafted perturbations of the input can
mislead the target model. In this paper, we introduce ACT, a novel adversarial
attack framework against NMT systems guided by a classifier. In our attack, the
adversary aims to craft meaning-preserving adversarial examples whose
translations in the target language by the NMT model belong to a different
class than the original translations. Unlike previous attacks, our new approach
has a more substantial effect on the translation by altering the overall
meaning, which then leads to a different class determined by an oracle
classifier. To evaluate the robustness of NMT models to our attack, we propose
enhancements to existing black-box word-replacement-based attacks by
incorporating output translations of the target NMT model and the output logits
of a classifier within the attack process. Extensive experiments, including a
comparison with existing untargeted attacks, show that our attack is
considerably more successful in altering the class of the output translation
and has a greater impact on the translation itself. This new paradigm can
reveal vulnerabilities of NMT systems by focusing on the class of the
translation rather than on mere translation quality, as studied traditionally.
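The abstract does not spell out the attack algorithm, but the classifier-guided word-replacement process it describes can be sketched roughly as below. This is a minimal illustration under assumptions, not the paper's implementation: translate, classifier_logits, and synonyms are hypothetical placeholders for the black-box NMT model, the oracle classifier over target-language text, and a meaning-preserving candidate generator.

# Hedged sketch of a classifier-guided word-replacement attack.
# All three callables below are hypothetical stand-ins, not the paper's API.
from typing import Callable, List
import numpy as np

def act_attack(
    src_sentence: str,
    translate: Callable[[str], str],                 # black-box target NMT model
    classifier_logits: Callable[[str], np.ndarray],  # oracle classifier on translations
    synonyms: Callable[[str], List[str]],            # meaning-preserving replacements
    max_replacements: int = 3,
) -> str:
    """Greedily replace source words so the translation's predicted class changes."""
    orig_class = int(np.argmax(classifier_logits(translate(src_sentence))))
    tokens = src_sentence.split()

    for _ in range(max_replacements):
        best = None  # (score, position, candidate word)
        for i, word in enumerate(tokens):
            for cand in synonyms(word):
                trial = tokens[:i] + [cand] + tokens[i + 1:]
                logits = classifier_logits(translate(" ".join(trial)))
                # Lower logit for the original class means the replacement
                # pushes the translation toward a different class.
                score = -float(logits[orig_class])
                if best is None or score > best[0]:
                    best = (score, i, cand)
        if best is None:
            break  # no candidates available
        _, i, cand = best
        tokens[i] = cand
        adv = " ".join(tokens)
        if int(np.argmax(classifier_logits(translate(adv)))) != orig_class:
            return adv  # the class of the output translation has flipped
    return " ".join(tokens)

The greedy, query-based loop reflects the black-box setting the abstract describes: the adversary only observes the NMT model's output translations and the classifier's logits, and uses the latter as a search signal rather than optimizing translation quality metrics directly.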