
Evaluation of Robustness of Off-Road Autonomous Driving Segmentation Against Adversarial Attacks: A Dataset-Centric Study.

AAMAS '24: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (2024)

Abstract
This study investigates the vulnerability of semantic segmentation models to adversarial input perturbations in the domain of off-road autonomous driving. Despite good performance under generic conditions, state-of-the-art classifiers are often susceptible to even small perturbations, ultimately resulting in inaccurate predictions made with high confidence. Prior research has focused on making models more robust by modifying the architecture and training with noisy input images, but has not explored the influence of datasets on adversarial attacks. Our study aims to address this gap by examining the impact of non-robust features in off-road datasets and comparing the effects of adversarial attacks on different segmentation network architectures. To enable this, a robust dataset consisting of only robust features is created, and the networks are trained on this robustified dataset. We present both qualitative and quantitative analyses of our findings, which have important implications for improving the robustness of machine learning models in off-road autonomous driving applications. Additionally, this work contributes to the safe navigation of the autonomous robot Unimog U5023 in rough, unstructured off-road environments by evaluating the robustness of segmentation outputs. The code is publicly available at https://github.com/rohtkumar/adversarial_attacks_on_segmentation
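
The abstract does not state which attack or architecture the authors evaluate; as a hedged illustration of the kind of adversarial input perturbation being studied, the sketch below applies a single-step FGSM perturbation to a generic PyTorch segmentation model. The model choice (torchvision's DeepLabV3-ResNet50), number of classes, and epsilon are assumptions for illustration only, not the paper's setup.

```python
# Illustrative sketch (assumptions: DeepLabV3-ResNet50, 19 classes, eps = 8/255),
# not the method from the paper.
import torch
import torch.nn.functional as F
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=19).eval()

def fgsm_perturb(image: torch.Tensor, target: torch.Tensor,
                 epsilon: float = 8 / 255) -> torch.Tensor:
    """Return an FGSM-perturbed copy of `image`.

    image:  (N, 3, H, W) float tensor in [0, 1]
    target: (N, H, W) long tensor of per-pixel class labels
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)["out"]            # (N, C, H, W) per-pixel logits
    loss = F.cross_entropy(logits, target)  # per-pixel classification loss
    loss.backward()
    # Single-step sign-gradient attack; clamp back to the valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Usage: compare predictions on clean vs. perturbed input.
x = torch.rand(1, 3, 256, 256)
y = torch.randint(0, 19, (1, 256, 256))
x_adv = fgsm_perturb(x, y)
with torch.no_grad():
    clean_pred = model(x)["out"].argmax(1)
    adv_pred = model(x_adv)["out"].argmax(1)
print("fraction of pixels whose label changed:",
      (clean_pred != adv_pred).float().mean().item())
```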