Adversarial Barrel! An Evaluation of 3D Physical Adversarial Attacks

2022 IEEE Applied Imagery Pattern Recognition Workshop (AIPR)

Abstract
Computer vision models based on Deep Neural Networks (DNNs) are vulnerable to adversarial attacks. It has also been demonstrated that physical adversarial attacks can affect computer vision models through printed media or physical 3D objects. However, the efficacy of physical adversarial attacks is highly variable under real-world conditions. In this research, we leverage a synthetic validation environment to evaluate 2D and 3D physical adversarial attacks on state-of-the-art object detection models (Faster-RCNN, RetinaNet, YOLOv3, YOLOv4). Using the Unreal Engine, we create synthetic environments to evaluate the limitations of physical adversarial attacks. We evaluate 2D adversarial patches under varying lighting conditions and poses. We optimize the same adversarial attacks for 3D shapes, including a pyramid, a cube, and a barrel (cylinder), and evaluate the robustness of the 3D physical attacks against the 2D attack baseline. We test our attacks against object detection models trained on MSCOCO, VIRAT, VISDRONE, and synthetic datasets. By advancing physical adversarial attacks and validation methodology, we improve our ability to red-team computer vision models, with the goal of defending and assuring AI systems used in fields such as transportation and security.
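The abstract describes optimizing 2D adversarial patches against object detectors under varying conditions before extending the attack to 3D shapes. The paper's own code is not included on this page; below is a minimal, illustrative sketch of an Expectation-over-Transformation-style 2D patch attack, assuming PyTorch with a torchvision Faster R-CNN. The patch size, the paste_patch helper, the random stand-in images, and the confidence-suppression loss are assumptions made for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of 2D adversarial patch optimization (EOT-style), assuming
# PyTorch and a torchvision Faster R-CNN; hyperparameters and helpers below
# are illustrative assumptions, not the paper's method.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
for p in detector.parameters():
    p.requires_grad_(False)  # detector stays frozen; only the patch is learned

patch = torch.rand(3, 64, 64, requires_grad=True)   # learnable patch texture
optimizer = torch.optim.Adam([patch], lr=0.01)

def paste_patch(image, patch, top, left):
    """Overwrite a region of the image with the (clamped) patch."""
    patched = image.clone()
    _, h, w = patch.shape
    patched[:, top:top + h, left:left + w] = patch.clamp(0.0, 1.0)
    return patched

# Random images stand in for rendered scenes of the target object.
images = [torch.rand(3, 480, 640) for _ in range(4)]

for step in range(100):
    optimizer.zero_grad()
    total_score = torch.zeros(())
    for img in images:
        # Random placement stands in for pose/lighting variation in the renders.
        top = int(torch.randint(0, 480 - 64, (1,)))
        left = int(torch.randint(0, 640 - 64, (1,)))
        patched = paste_patch(img, patch, top, left)
        out = detector([patched])[0]
        if out["scores"].numel() > 0:
            total_score = total_score + out["scores"].max()
    # Minimizing the detector's top confidence suppresses detections (evasion).
    if total_score.requires_grad:
        total_score.backward()
        optimizer.step()
```

In the paper's setting, the transformed views would come from Unreal Engine renders under varying lighting and pose rather than random placements, and the optimized texture would additionally be mapped onto 3D shapes such as a pyramid, a cube, or a barrel.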
Keywords
adversarial machine learning, ML, attacks, baseline, computer vision, AI, security