A Learnable Image Compression Scheme for Synthetic Aperture Sonar Imagery

OCEANS 2021: San Diego – Porto (2021)

Abstract
Synthetic aperture sonar (SAS) is an imaging modality that produces high, constant-resolution images of the seafloor. These sonars are often mounted on an unmanned underwater vehicle (UUV) to autonomously collect imagery of a prescribed survey area. While a survey is underway, UUV communications back to the operator are often limited by the low-bandwidth acoustic communications (ACOMMS) channel. As a result, high-quality SAS imagery is rarely sent over this link, since no efficient compression scheme exists for such data. An efficient SAS image compression scheme provides at least two operational benefits: (1) image chips beamformed and tagged by onboard processing algorithms can be quickly communicated to operators while a survey is ongoing, and (2) cooperative UUVs can exchange salient image chips among themselves to reconcile position ambiguity and obtain a shared reference frame. In this work we propose a learned image compression scheme for SAS imagery using deep neural networks (DNNs). DNNs have already been applied to the image compression problem, but almost exclusively for optical imagery. We highlight important differences between SAS imagery and optical imagery that prevent the simple application of off-the-shelf (OTS) methods such as JPEG and WebP to SAS imagery. We propose an image compression scheme that specifically addresses the domain-specific properties of SAS imagery and achieves useful compression performance on a real-world SAS dataset. We show that we can reduce the bitrate by up to thirty-five percent while maintaining the same perceptual image quality as OTS codecs.
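The abstract describes a learned compression pipeline in which a neural encoder maps the image to a latent representation, the latent is quantized, and an entropy model determines the bitrate. As a rough illustration of that pipeline shape only, the toy sketch below stands in for the DNN encoder with a fixed linear projection and estimates the bitrate with an empirical entropy; the transform, dimensions, and function names are assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def analysis_transform(x, W):
    # Stand-in for the DNN encoder: a fixed linear projection (assumption).
    return x @ W

def quantize(y):
    # Uniform scalar quantization via rounding, as in typical learned codecs.
    return np.round(y)

def bitrate_bits_per_symbol(q):
    # Empirical entropy (bits per latent symbol) of the quantized latent;
    # a real codec would use a learned entropy model instead.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

x = rng.normal(size=(64, 16))        # toy "image" rows treated as vectors
W = rng.normal(size=(16, 8)) * 0.1   # toy encoder weights (assumption)
q = quantize(analysis_transform(x, W))
print(f"entropy estimate: {bitrate_bits_per_symbol(q):.2f} bits/symbol")
```

Lowering the entropy of the quantized latent is what reduces the bitrate; the paper's contribution is tailoring this pipeline to the statistics of SAS imagery rather than optical imagery.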
Keywords
Synthetic aperture sonar (SAS), image compression, deep learning, unmanned underwater vehicles, acoustic communications (ACOMMS)