228 × 304 200-mW lidar based on a single-point global-depth d-ToF sensor and RGB-guided super-resolution neural network

Optics Letters (2023)

Abstract
Cutting-edge imaging systems exhibit low output resolution and high power consumption, presenting challenges for RGB-D fusion algorithms. In practical scenarios, matching the depth map resolution to that of the RGB image sensor is a crucial requirement. In this Letter, software-hardware co-design is considered to implement a lidar system based on a monocular RGB 3D imaging algorithm. A 6.4 × 6.4 mm² deep-learning accelerator (DLA) system-on-chip (SoC) manufactured in a 40-nm CMOS process is incorporated with a 3.6 mm² TX-RX integrated chip fabricated in a 180-nm CMOS process to run the customized single-pixel imaging neural network. In comparison to the RGB-only monocular depth estimation technique, the root mean square error on the evaluated dataset is reduced from 0.48 m to 0.3 m, and the output depth map resolution matches the RGB input. © 2023 Optica Publishing Group
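As a hedged illustration of the evaluation metric reported above (root mean square error between a predicted depth map and ground truth), here is a minimal NumPy sketch. The array shapes and values are hypothetical stand-ins, not data from the paper; only the 228 × 304 resolution is taken from the title.

```python
import numpy as np

def depth_rmse(pred, gt, valid_mask=None):
    """Root mean square error (in meters) between predicted and
    ground-truth depth maps, restricted to valid ground-truth pixels."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid_mask is None:
        valid_mask = gt > 0  # ignore pixels with no ground-truth depth
    diff = pred[valid_mask] - gt[valid_mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical 228 x 304 depth maps (resolution from the sensor in the title).
rng = np.random.default_rng(0)
gt = rng.uniform(0.5, 5.0, size=(228, 304))        # synthetic ground truth
pred = gt + rng.normal(0.0, 0.3, size=gt.shape)    # ~0.3 m noise, as a stand-in
print(round(depth_rmse(pred, gt), 2))
```

A lower RMSE on the same dataset, as in the 0.48 m to 0.3 m improvement quoted in the abstract, indicates that the depth-guided fusion outperforms the RGB-only baseline.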
Keywords
lidar, single-point, global-depth, d-ToF, RGB-guided, super-resolution