Tolerate Failures of the Visual Camera With Robust Image Classifiers.

IEEE Access (2023)

Abstract
Deep Neural Networks (DNNs) have become an enabling technology for building accurate image classifiers, and are increasingly being applied in many ICT systems such as autonomous vehicles. Unfortunately, classifiers can be deceived by images that are altered due to failures of the visual camera, preventing the proper execution of the classification process. Therefore, it is of utmost importance to build image classifiers that can guarantee accurate classification even in the presence of such camera failures. This study crafts classifiers that are robust to failures of the visual camera by augmenting the training set with artificially altered images that simulate the effects of such failures. This data augmentation approach improves classification accuracy with respect to the most common data augmentation approaches, even in the absence of camera failures. To provide experimental evidence for our claims, we exercise three DNN image classifiers on three image datasets, into which we inject the effects of many visual camera failures. Finally, we apply eXplainable AI to discuss why classifiers trained with the data augmentation approach proposed in this study can tolerate failures of the visual camera.
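To illustrate the kind of failure-simulating augmentation the abstract describes, the sketch below alters training images with a few hypothetical camera-failure models (additive sensor noise, overexposure, dead pixels). The specific failure models, parameter values, and function names (simulate_camera_failures, augment_dataset) are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def simulate_camera_failures(image, rng=None):
    """Return variants of `image` (H x W x C, uint8) altered to mimic
    common visual camera failures. The failure models below (Gaussian
    sensor noise, overexposure, dead pixels) are illustrative
    assumptions, not the exact set used in the paper."""
    rng = rng or np.random.default_rng()
    img = image.astype(np.float32)
    altered = {}

    # Sensor noise: additive Gaussian noise on every pixel.
    noisy = img + rng.normal(0.0, 15.0, img.shape)
    altered["noise"] = np.clip(noisy, 0, 255).astype(np.uint8)

    # Exposure failure: an overexposed (too bright) frame.
    altered["overexposure"] = np.clip(img * 1.8, 0, 255).astype(np.uint8)

    # Dead pixels: zero out a small random fraction of pixel locations.
    dead = img.copy()
    mask = rng.random(img.shape[:2]) < 0.01
    dead[mask] = 0
    altered["dead_pixels"] = dead.astype(np.uint8)

    return altered

def augment_dataset(images, labels):
    """Append failure-simulating variants of each image to the
    training set, keeping the original label."""
    aug_images, aug_labels = list(images), list(labels)
    for img, lab in zip(images, labels):
        for variant in simulate_camera_failures(img).values():
            aug_images.append(variant)
            aug_labels.append(lab)
    return aug_images, aug_labels
```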
Keywords
Visual camera failures, deep learning, data augmentation, robustness, traffic sign recognition