Preliminary Results on Distribution Shift Performance of Deep Networks for Synthetic Aperture Sonar Classification

OCEANS 2022, Hampton Roads (2022)

Abstract
We demonstrate how easily deep networks are fooled by sonar-relevant distribution shifts induced by sonar-relevant transforms, imaging errors common to SAS, and unseen target/background combinations. Furthermore, the generated images fooling the networks are trivial for human operators to interpret. We posit this disconnect between human and machine performance is an open area of research and, when reconciled, will improve human-machine trust. Our goal with this work is to begin discerning where deep network performance (specifically convolutional neural networks (CNNs)) deteriorates and how their perception model differs from humans'. Specifically, we show network performance varies widely across contemporary architectures and training schemes on: (1) images derived from a set of sonar-relevant transformations, which we call semantically stable, (2) imagery perturbed with quadratic phase error (common to SAS), and (3) a synthetic target dataset created by injecting real targets into unseen real backgrounds. Finally, we delineate the relationship between spatial frequency and network performance and find many networks rely almost exclusively on low-frequency content to make their predictions. These results may help illuminate why changes to a sonar system or simulation sometimes necessitate complete network retraining to accommodate the "new" data, a time-consuming process. Consequently, we hope this work stimulates future research bridging the gap between human and machine perception in the space of automated SAS image interpretation.
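The two perturbations named in the abstract can be illustrated with a minimal sketch. This is not the paper's code: it assumes grayscale magnitude imagery as 2-D NumPy arrays, and the quadratic phase error model (a quadratic phase ramp applied along one axis in the Fourier domain) and the radial low-pass probe are simplified, illustrative stand-ins for the paper's actual experiments. Function names and parameters are hypothetical.

```python
import numpy as np

def low_pass(image, cutoff_frac):
    """Probe frequency reliance: keep spatial frequencies whose normalized
    radius is below cutoff_frac (1.0 ~ Nyquist along each axis), zero the rest.
    Feeding such filtered images to a classifier reveals how much it depends
    on low-frequency content."""
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    F[radius > cutoff_frac] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

def quadratic_phase_error(image, strength):
    """Simplified QPE perturbation: multiply the along-track (axis 0) spectrum
    by a quadratic phase ramp exp(j * strength * u^2), a toy model of the
    defocus-like imaging errors common to SAS."""
    F = np.fft.fft(image, axis=0)
    u = np.fft.fftfreq(image.shape[0])  # normalized frequencies
    ramp = np.exp(1j * strength * u ** 2)
    return np.abs(np.fft.ifft(F * ramp[:, None], axis=0))
```

A frequency-sweep experiment would then evaluate classifier accuracy on `low_pass(img, c)` for a range of cutoffs `c`; if accuracy is already high at small cutoffs, the network is relying mostly on low-frequency content, matching the abstract's finding.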
Keywords
synthetic aperture sonar, machine perception, machine learning, deep learning