Crafting Adversarial Examples on 3D Object Detection Sensor Fusion Models

Semantic Scholar (2020)

Abstract
A critical aspect of autonomous vehicles is the object detection stage, which is increasingly performed with so-called sensor fusion models: 3D object detection models that take both 2D RGB image data and 3D depth data (e.g., from a LIDAR sensor) as inputs. However, while there has been a great deal of work on the performance of these models, their security, particularly against adversarial examples, has not yet been explored. In this work, we perform the first preliminary study analyzing the robustness of a popular sensor fusion model architecture to adversarial attacks. We find that despite the use of the 3D data, simply modifying the image via our raw-pixel attack is enough to fool the model and cause objects to disappear. We picked 28 random samples containing 119 vehicles from the KITTI dataset and show that our raw-pixel disappearance attack is able to generate successful adversarial examples against 133 of those images. We extend this attack and develop a modified algorithm that creates generalizable adversarial patches capable of fooling the model on multiple vehicles. To better understand this behavior, we run experiments showing that the model learns to rely on the LIDAR input more than the image input, suggesting the image input can prove to be an "Achilles' heel" against adversarial examples.
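The raw-pixel disappearance attack described in the abstract follows the standard gradient-based recipe: perturb only the image branch of the fusion model while holding the LIDAR input fixed, and drive detection confidences toward zero. Below is a minimal PGD-style sketch of that idea in PyTorch. The model interface (a fusion model returning per-detection confidence scores for an image/LIDAR pair), the loss, and all hyperparameters are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def raw_pixel_disappearance_attack(model, image, lidar,
                                   eps=8 / 255, alpha=1 / 255, steps=40):
    """PGD-style sketch: perturb only the RGB image (the LIDAR input is
    left untouched) to drive all detection confidences toward zero."""
    image = image.clone().detach()
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        # Hypothetical interface: the fusion model returns a tensor of
        # per-detection confidence scores for the fused inputs.
        scores = model(image + delta, lidar)
        loss = scores.sum()  # total confidence; lower means objects "disappear"
        loss.backward()

        with torch.no_grad():
            # Signed-gradient descent on total confidence, projected onto
            # the L-infinity ball of radius eps and the valid pixel range.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_(torch.clamp(image + delta, 0, 1) - image)
        delta.grad.zero_()

    return (image + delta).detach()
```

The patch extension can be read as the same objective with the perturbation constrained to a small region and shared across scenes, so one patch suppresses detections on many vehicles. The sketch below assumes each scene provides 2D vehicle boxes and that the patch fits inside them; `apply_patch` and its simple top-left placement are hypothetical helpers for illustration.

```python
def apply_patch(image, patch, boxes):
    """Paste the patch at the top-left corner of each vehicle's 2D box.
    Deliberately simplified; a real attack would scale and warp the
    patch to each vehicle's apparent size."""
    out = image.clone()
    _, ph, pw = patch.shape
    for x, y, _, _ in boxes:
        out[:, y:y + ph, x:x + pw] = patch
    return out

def train_generalizable_patch(model, scenes, patch_size=(3, 64, 64),
                              steps=500, lr=0.01):
    """Optimize one shared patch over many scenes so it suppresses
    detections on whichever vehicle it is placed on."""
    patch = torch.rand(patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        for image, lidar, boxes in scenes:  # (image, LIDAR, 2D vehicle boxes)
            scores = model(apply_patch(image, patch, boxes), lidar)
            loss = scores.sum()  # minimize confidence on patched vehicles
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                patch.clamp_(0, 1)  # keep the patch a displayable image

    return patch.detach()
```

Note that both sketches leave the LIDAR branch untouched by construction, which mirrors the paper's finding: if the model leans on LIDAR for localization but the image branch still gates confidence, pixel-space perturbations alone can suffice.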