Mutation Testing based Safety Testing and Improving on DNNs.

Yuhao Wei, Song Huang, Yu Wang, Ruilin Liu, Chunyan Xia

QRS (2022)

Abstract
In recent years, deep neural networks (DNNs) have made great progress in people’s daily lives as data access and labeling have become easier. However, DNNs have been shown to behave unpredictably, especially when facing small perturbations in their input data, which limits their application in self-driving and other safety-critical fields. Human-made attacks such as adversarial attacks could cause extremely serious consequences. In this work, we design and evaluate a safety testing method for DNNs based on mutation testing, and propose an adversarial training method based on the testing results and joint optimization. First, we apply adversarial mutations to the test datasets and measure the performance of the models on the adversarial samples with mutation scores. Next, we evaluate the validity of mutation scores as a quantitative safety indicator by comparing DNN models with their updated versions. Finally, we construct a joint optimization problem that incorporates safety scores into adversarial training, improving both the safety of the model and the generalizability of its defense capability.
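The abstract's first step, scoring a model against adversarial mutants of its test set, can be sketched as below. This is a minimal illustration under assumptions, not the paper's implementation: the toy linear classifier, the FGSM-style perturbation, and the definition of the mutation score as the fraction of "killed" mutants (mutants that flip the model's prediction) are all hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predicts 1 iff w.x + b > 0 (an assumption
# standing in for a trained DNN).
w = np.array([1.0, -2.0])
b = 0.1

def predict(x):
    return int(np.dot(w, x) + b > 0)

def adversarial_mutant(x, eps=0.5):
    # FGSM-like step: perturb along the sign of the gradient of the
    # decision function w.x + b, pushing the input toward the boundary.
    direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
    return x + eps * direction

def mutation_score(test_set, eps=0.5):
    # A mutant is "killed" if the model's prediction flips; the score
    # is the killed fraction, used here as a rough safety indicator.
    killed = sum(predict(adversarial_mutant(x, eps)) != predict(x)
                 for x in test_set)
    return killed / len(test_set)

test_set = [rng.normal(size=2) for _ in range(200)]
score = mutation_score(test_set)
print(f"mutation score at eps=0.5: {score:.2f}")
```

Under this toy definition a lower score suggests a model that is harder to perturb; the paper's joint optimization would then fold such a safety score into the adversarial training objective.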
Keywords
safety of DNNs, adversarial, mutation testing