Object Semantics Give Us the Depth We Need: Multi-task Approach to Aerial Depth Completion

CoRR (2023)

Abstract
Depth completion and object detection are two crucial tasks often used for aerial 3D mapping, path planning, and collision avoidance of Uncrewed Aerial Vehicles (UAVs). Common solutions include using measurements from a LiDAR sensor; however, the generated point cloud is often sparse and irregular, limiting the system's capabilities in 3D rendering and safety-critical decision-making. To mitigate this challenge, information from other sensors on the UAV (viz., a camera used for object detection) is utilized to help the depth completion process generate denser 3D models. Performing both aerial depth completion and object detection tasks while fusing the data from the two sensors poses a challenge to resource efficiency. We address this challenge by proposing a novel approach that jointly executes the two tasks in a single pass. The proposed method is based on an encoder-focused multi-task learning model that exposes the two tasks to jointly learned features. We demonstrate how semantic expectations of the objects in the scene, learned by the object detection pathway, can boost the performance of the depth completion pathway when predicting the missing depth values. Experimental results show that the proposed multi-task network outperforms its single-task counterpart, particularly when exposed to defective inputs.
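The single-pass, shared-encoder idea described above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual network: the linear "encoder", head sizes, and per-pixel fusion of RGB with sparse depth are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, w):
    """Toy stand-in for the jointly learned convolutional encoder:
    one linear projection followed by a ReLU."""
    return np.maximum(x @ w, 0.0)

def depth_head(features, w):
    """Depth-completion head: regresses one dense depth value per pixel."""
    return features @ w

def detection_head(features, w):
    """Object-detection head: per-pixel class probabilities via softmax."""
    logits = features @ w
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical fused input: 3 RGB channels + 1 sparse LiDAR depth channel,
# flattened to one feature vector per pixel.
n_pixels, in_ch, feat_dim, n_classes = 16, 4, 8, 5
x = rng.standard_normal((n_pixels, in_ch))

w_enc = rng.standard_normal((in_ch, feat_dim))
w_depth = rng.standard_normal((feat_dim, 1))
w_det = rng.standard_normal((feat_dim, n_classes))

# One forward pass through the shared encoder serves both task heads,
# which is what makes the joint model cheaper than two separate networks.
shared = shared_encoder(x, w_enc)
depth = depth_head(shared, w_depth)        # (n_pixels, 1) dense depth
classes = detection_head(shared, w_det)    # (n_pixels, n_classes) semantics

print(depth.shape, classes.shape)
```

In a real implementation each head would also carry its own loss (e.g. RMSE for depth, a detection loss for the boxes), and the gradients from both losses would update the shared encoder, which is how the semantic cues reach the depth pathway.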
Keywords
Depth Completion, Object Detection, Point Cloud, Unmanned Aerial Vehicles, Path Planning, Depth Values, Single Pass, Multi-task Learning, Objects In The Scene, Object Detection Task, LiDAR Sensor, Multi-task Model, Point Cloud Generation, Multi-task Network, Root Mean Square Error, Convolutional Neural Network, Convolutional Layers, Feature Maps, Bounding Box, Depth Map, RGB Images, Feature Pyramid Network, Deep Learning-based Methods, Network Behavior, Auxiliary Task, Uncertainty Map, Region Proposal Network, Surface Normals, Geometric Features