Similarity-based optimised and adaptive adversarial attack on image classification using neural network.

Balika J. Chelliah, Mohammad Mustafa Malik, Ashwin Kumar, Nitin Singh, Rajan Regin

Int. J. Intell. Eng. Informatics (2023)

Abstract
Deep learning (DL) techniques have been widely adopted for image classification, natural language processing (NLP), and speech recognition. Research on model security, however, is dominated by unrealistic adversarial samples, while genuinely realisable attacks remain worryingly understudied, even though it is these attacks that compromise real-world applications. Studying them helps in understanding adversarial robustness under real-world constraints. Using real-world cases and data, we test whether defending against unrealistic adversarial samples can protect models against realistic ones. Dropping nodes from the first convolutional layer reveals which deep-learning neurons are weak and which are stable, and adversarial targeting links these neurons to the network's vulnerability to attack. Adversarial resilience of neural networks is a popular topic, yet DL networks still fail against skilfully manipulated input images. Our results show that unrealistic examples are as effective as realistic ones, or provide only small improvements. Second, we investigate the hidden representations of adversarial instances under realistic and unrealistic attacks to explain these results. We show cases in which unrealistic samples can serve similar purposes, helping future work bridge realistic and unrealistic adversarial approaches, and we release the code, datasets, models, and findings.
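As a rough illustration of the first-layer node-dropout probe described in the abstract (not the authors' released code), the sketch below zeroes out one channel of the first convolutional layer at a time and records the change in accuracy; a large drop flags a channel the network relies on heavily, and comparing clean versus adversarial drops is how such channels could be linked to attack sensitivity. The SmallCNN architecture and the random placeholder batch are assumptions made only to keep the example self-contained.

```python
# Hedged sketch: probe first-conv-layer channels by zeroing ("dropping") each
# one and measuring the accuracy change, to separate weak from stable neurons.
import torch
import torch.nn as nn


class SmallCNN(nn.Module):
    """Toy stand-in for the classifier under study (an assumption, not the paper's model)."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, 3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x, drop_channel=None):
        h = torch.relu(self.conv1(x))
        if drop_channel is not None:
            h = h.clone()
            h[:, drop_channel] = 0.0  # "nodal dropout" of one first-layer channel
        h = torch.relu(self.conv2(h))
        return self.fc(self.pool(h).flatten(1))


@torch.no_grad()
def accuracy(model, x, y, drop_channel=None):
    preds = model(x, drop_channel=drop_channel).argmax(dim=1)
    return (preds == y).float().mean().item()


if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallCNN().eval()
    # Placeholder batch; in practice this would be real clean and adversarial test data.
    x = torch.randn(64, 3, 32, 32)
    y = torch.randint(0, 10, (64,))

    baseline = accuracy(model, x, y)
    for c in range(model.conv1.out_channels):
        drop_acc = accuracy(model, x, y, drop_channel=c)
        # A large accuracy drop marks a channel the network leans on heavily;
        # repeating the probe on adversarial inputs links channels to attack sensitivity.
        print(f"channel {c:2d}: baseline {baseline:.3f} -> dropped {drop_acc:.3f}")
```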
Keywords
deep neural network, DNN, interactive gradient shielding, generative adversarial networks, adversarial samples