Real-Time Semantic Plane Reconstruction on a Monocular Drone Using Sparse Fusion

IEEE Transactions on Vehicular Technology (2019)

Abstract
A semantic map, which provides important data to facilitate a drone's understanding of its environment, is critical for a fully autonomous drone. However, recent methods for producing such a map can hardly achieve real-time performance when implemented on small drones that use low- to middle-grade processors and graphics processing units. In addition, few existing methods can reconstruct semantic planes from a sparse depth map, which greatly reduces computational load but poses challenges for semantic reconstruction. To address these problems, this paper presents a novel on-board approach, called sparse fusion, which achieves real-time reconstruction of semantic planes on a self-designed small drone. For each frame, the approach combines the sparse depth map derived from visual-inertial simultaneous localization and mapping with the semantic labels derived from a convolutional neural network. Our proposed local plane optimization function greatly improves the accuracy of the semantic planes. Experimental results in various scenarios demonstrate that our sparse fusion module, running on the drone platform, can update a semantic plane within 1 ms, and is both faster and more accurate than the state-of-the-art real-time semantic reconstruction method. We also conducted experiments in real environments to demonstrate the performance of our method.
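The fusion step described in the abstract — attaching per-frame CNN labels to the sparse 3D points from visual-inertial SLAM and then fitting a plane per semantic class — can be sketched as below. This is a minimal illustration under assumed inputs; the function names are hypothetical, and a plain least-squares plane fit stands in for the paper's local plane optimization function.

```python
import numpy as np

def fuse_labels(points_3d, labels):
    """Hypothetical sparse-fusion step: group sparse 3D points
    by the semantic label projected onto them from the CNN output."""
    groups = {}
    for point, label in zip(points_3d, labels):
        groups.setdefault(label, []).append(point)
    return {label: np.asarray(pts, dtype=float) for label, pts in groups.items()}

def fit_plane(points):
    """Least-squares plane fit via SVD.
    Returns a unit normal n and offset d such that n . x + d = 0
    for points x on the plane; stands in for the paper's optimizer."""
    centroid = points.mean(axis=0)
    # The right-singular vector with the smallest singular value
    # is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d
```

In a real pipeline, each group of labeled points would be refined incrementally as new frames arrive, which is what makes the per-frame update cost so low compared with dense fusion.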
Key words
Semantics, Simultaneous localization and mapping, Drones, Cameras, Feature extraction, Optimization