Fake it till you break it: Evaluating the Performance of Synthetically-optimized Adversarial Patches Against Real-world Imagery

GEOSPATIAL INFORMATICS XIII (2023)

Abstract
Deep neural networks (DNNs), enabled by massive open datasets like ImageNet, have produced impressive results in a wide range of fields and applications. ImageNet, a database of over 15 million high-resolution images categorized into 22,000 categories, has revolutionized the field of computer vision, with state-of-the-art models achieving 98% accuracy. However, this performance comes at a cost. Recent advances in adversarial machine learning have revealed inherent vulnerabilities in DNN-based models. Adversarial patches have been successfully used to disrupt the performance of artificial intelligence (AI) systems that leverage DNN-based computer vision models, but the trade space of these attacks is not fully understood; adversarial attack generation and validation methods are still nascent. In this paper, we explore the generation and performance of synthetically-trained attacks against models trained on real datasets such as MSCOCO, VIRAT, and VisDrone. Using a synthetic environment tool built on the Unreal Engine, we generate a synthetic dataset consisting of pedestrians and vehicles, train synthetic object detection models, and optimize adversarial patch attacks on the synthetic feature space of those models. We then apply these synthetic attacks to real image data and examine their efficacy against models trained on real-world imagery. The implications of synthetically optimized attacks are broad: a much larger attack surface for DNN-based computer vision models, development of simulation-based validation pipelines, more effective attacks, and stronger defenses against adversarial examples.
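The patch-optimization step described in the abstract can be illustrated with a short sketch. The following Python/PyTorch snippet is not the authors' implementation; it is a minimal illustration, assuming a COCO-pretrained torchvision detector as a stand-in for the paper's synthetic-trained models, random tensors as stand-ins for Unreal Engine renders, and a fixed paste location for the patch. A practical attack would typically target pre-NMS objectness scores and randomize patch placement, scale, and lighting (expectation over transformation).

import torch
import torchvision

# Stand-in detector: the paper's synthetic-trained models are not public, so a
# COCO-pretrained Faster R-CNN is used here purely as a placeholder.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# The adversarial patch is the only trainable tensor, initialized to mid-gray.
patch = torch.full((3, 64, 64), 0.5, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)

def apply_patch(images, patch):
    # Paste the patch at a fixed location; real attacks randomize placement,
    # scale, and rotation so the patch survives viewpoint changes.
    patched = images.clone()
    patched[:, :, 100:164, 100:164] = patch.clamp(0.0, 1.0)
    return patched

for step in range(200):
    # Placeholder batch; in the paper's pipeline these would be synthetic
    # renders of pedestrians and vehicles from the Unreal Engine tool.
    images = torch.rand(2, 3, 320, 320)
    patched = apply_patch(images, patch)

    # Suppression objective: drive the summed detection confidences to zero.
    outputs = model(list(patched))
    scores = torch.cat([out["scores"] for out in outputs])
    if scores.numel() == 0:
        continue  # nothing detected above threshold; skip this step
    loss = scores.sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0.0, 1.0)  # keep the patch a valid image

Under these assumptions, evaluating transfer then amounts to freezing the optimized patch, compositing it onto real imagery (e.g., MSCOCO or VisDrone frames), and comparing detector recall with and without the patch.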
Keywords
adversarial machine learning, ML, attacks, baseline, computer vision, AI, security