Multiple modality sensor fusion from radar, lidar, and electro-optical systems using three-dimensional data representations

J. R. Jamora, Paul Sotirelis, Adam Nolan, Jeff Walrath, Rick Hubbard, Rajith Weerasinghe, Eric Young, Seth Young

Algorithms for Synthetic Aperture Radar Imagery XXIX (2022)

Abstract
Data exploitation for single sensors commonly relies on two-dimensional (2D) imagery. To best combine information from multiple sensing modalities, each with its own fundamental differences, we employ sensor fusion to exploit each modality's strengths and compensate for its inherent weaknesses. Fusing multiple sensor modalities in 2D, however, quickly becomes intractable because each sensor has its own projection plane and resolution. In this work, we present and analyze a data-driven approach that fuses multiple modalities by extracting data representations for each sensor into three-dimensional (3D) space, supporting sensor fusion natively in a common frame of reference. Photogrammetry and computer vision methods for recovering point clouds from 2D electro-optical imagery, such as structure from motion and multi-view stereo, have shown promising results, and 3D data representations can likewise be derived from interferometric synthetic aperture radar (IFSAR) and lidar sensors. We use point cloud representations for all three modalities, allowing us to leverage each sensing modality's individual strengths while accounting for its weaknesses. Given our data-driven focus, we emphasize fusing the point cloud data in controlled scenarios with known parameters. We also conduct an error analysis for each sensor modality based on sensor position, resolution, and noise.
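
The abstract only outlines the approach, so the sketch below is a minimal illustration of the core idea rather than the paper's pipeline: per-modality point clouds are mapped into a shared world frame using known rigid sensor poses (consistent with the controlled-scenario, known-parameter setting described above), stacked into one fused cloud, and scored with a simple nearest-neighbor error against a reference cloud as a stand-in for the per-modality error analysis. The sensor names, sampling densities, noise levels, and transforms are hypothetical assumptions, not values from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


def to_world(points, rotation, translation):
    """Map an Nx3 sensor-frame point cloud into the shared world frame
    via a known rigid transform (controlled scenario, known parameters)."""
    return points @ rotation.T + translation


def fuse_point_clouds(clouds):
    """Stack per-modality world-frame clouds, labeling each point with the
    index of its source modality so the fused cloud stays attributable."""
    labels = np.concatenate([np.full(len(c), i) for i, c in enumerate(clouds)])
    return np.vstack(clouds), labels


def nn_error(cloud, reference):
    """Per-point nearest-neighbor distance to a reference cloud: a simple
    stand-in for a per-modality error analysis."""
    distances, _ = cKDTree(reference).query(cloud)
    return distances


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical ground-truth scene: points on a flat roof patch.
    truth = rng.uniform([-5.0, -5.0, 10.0], [5.0, 5.0, 10.5], size=(2000, 3))

    # Hypothetical per-modality characteristics: each sensor observes the
    # scene in its own frame with its own sampling density and noise level.
    sensors = {
        "eo_sfm": {"keep": 0.8, "sigma": 0.15},   # SfM / multi-view stereo
        "lidar":  {"keep": 0.5, "sigma": 0.03},
        "ifsar":  {"keep": 0.3, "sigma": 0.30},
    }

    rotation = np.eye(3)  # assume known, identity sensor orientations
    world_clouds = []
    for params in sensors.values():
        keep = rng.random(len(truth)) < params["keep"]
        translation = rng.normal(0.0, 0.01, 3)  # small, known mounting offset
        sensor_frame = (truth[keep] - translation) + rng.normal(
            0.0, params["sigma"], (keep.sum(), 3))
        world_clouds.append(to_world(sensor_frame, rotation, translation))

    fused, labels = fuse_point_clouds(world_clouds)
    for i, name in enumerate(sensors):
        err = nn_error(fused[labels == i], truth)
        print(f"{name:7s}  points={np.sum(labels == i):5d}  "
              f"mean NN error={err.mean():.3f} m")
```

Because every modality is expressed in the same 3D world frame before fusion, combining sensors reduces to concatenating labeled point sets, avoiding the per-sensor projection planes and resolutions that make 2D fusion intractable.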