RNN-Test: Towards Adversarial Testing for Recurrent Neural Network Systems

IEEE Transactions on Software Engineering (2022)

Abstract
While massive efforts have been invested in adversarial testing of convolutional neural networks (CNN), testing for recurrent neural networks (RNN) is still limited, leaving vast sequential application domains exposed to threats. In this paper, we propose an adversarial testing framework, RNN-Test, for RNN systems, focusing on widely deployed sequence-to-sequence (seq2seq) tasks rather than classification domains alone. First, we design a novel search methodology customized for RNN models that produces adversarial inputs by maximizing the inconsistency of RNN states against their inner dependencies. Next, we introduce two state-based coverage metrics, derived from the distinctive structure of RNNs, to exercise more system behaviors. Finally, RNN-Test solves the joint optimization problem of maximizing state inconsistency and state coverage, and crafts adversarial inputs for various tasks with different kinds of inputs. For evaluation, we apply RNN-Test to four RNN models of common structures. On the tested models, RNN-Test is demonstrated to be competitive in generating adversarial inputs, outperforming FGSM-based and DLFuzz-based methods by reducing model performance more sharply, with a 2.78% to 37.94% higher success (or generation) rate. RNN-Test also achieves a 52.65% to 66.45% higher adversary rate than testRNN on the MNIST LSTM model, as well as 53.76% to 58.02% more perplexity with a 16% higher generation rate than DeepStellar on the PTB language model. Compared with traditional neuron coverage, the proposed state coverage metrics used as guidance excel, with a 4.17% to 97.22% higher success (or generation) rate.
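The joint optimization described above can be pictured with a minimal sketch: perturb an input so that a combined objective of state inconsistency plus a state-coverage bonus increases. The toy RNN, the inconsistency term, the coverage bonus, and the finite-difference gradient below are all illustrative stand-ins, not the paper's actual formulation.

```python
import numpy as np

def rnn_states(x, W=0.5, U=0.8):
    """Toy single-unit RNN unrolled over sequence x; returns all hidden states."""
    h, states = 0.0, []
    for t in range(len(x)):
        h = np.tanh(W * x[t] + U * h)
        states.append(h)
    return np.array(states)

def joint_objective(x, lam=0.1):
    """Illustrative joint objective: state inconsistency (gaps between
    consecutive hidden states) plus a coverage-style bonus for driving
    states into rarely exercised saturation regions."""
    s = rnn_states(x)
    inconsistency = np.sum(np.abs(np.diff(s)))
    coverage_bonus = np.sum(np.abs(s) > 0.9)
    return inconsistency + lam * coverage_bonus

def craft_adversarial(x, steps=50, eps=1e-4, lr=0.05):
    """Gradient ascent on the joint objective, using finite differences
    as a stand-in for backpropagation through the RNN."""
    x = x.copy()
    for _ in range(steps):
        grad = np.zeros_like(x)
        base = joint_objective(x)
        for i in range(len(x)):
            xp = x.copy()
            xp[i] += eps
            grad[i] = (joint_objective(xp) - base) / eps
        x += lr * grad
    return x
```

Starting from a benign sequence, the crafted input increases the objective, mirroring how RNN-Test steers inputs toward inconsistent, under-covered state behavior.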
Keywords
Adversarial testing, recurrent neural networks, coverage metrics