Enhanced Radar Perception via Multi-Task Learning: Towards Refined Data for Sensor Fusion Applications

Huawei Sun, Hao Feng, Gianfranco Mauro, Julius Ott, Georg Stettinger, Lorenzo Servadei, Robert Wille

CoRR (2024)

Abstract
Radar and camera fusion yields robust perception by leveraging the strengths of both sensors. However, the typically extracted radar point cloud is 2D and lacks height information because of the limited number of antennas along the elevation axis, which degrades network performance. This work introduces a learning-based approach to infer the height of radar points associated with 3D objects. A novel robust regression loss addresses the challenge of sparse regression targets, and a multi-task training strategy emphasizes the most important features. Compared to the state-of-the-art height extension method, the average absolute radar height error decreases from 1.69 to 0.25 meters. The estimated height values are then used to preprocess and enrich radar data for downstream perception tasks; integrating this refined radar information further improves existing radar-camera fusion models on object detection and depth estimation.
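The abstract does not specify the form of the robust regression loss, only that it must cope with sparse height targets (most cells of a target height map carry no radar point). A minimal sketch of one common way to handle this, assuming a masked Huber-style loss in PyTorch; the function name, the delta parameter, and the mask semantics are illustrative assumptions, not the paper's actual formulation:

import torch
import torch.nn.functional as F

def masked_huber_height_loss(pred_height: torch.Tensor,
                             gt_height: torch.Tensor,
                             valid_mask: torch.Tensor,
                             delta: float = 1.0) -> torch.Tensor:
    """Huber loss computed only where the sparse target map holds a radar point.

    pred_height, gt_height: (B, H, W) height maps in meters.
    valid_mask:             (B, H, W) bool, True where a ground-truth point exists.
    """
    # Restrict the residuals to valid cells so empty cells do not dominate.
    diff = (pred_height - gt_height)[valid_mask]
    if diff.numel() == 0:
        # No radar points in this batch: return a zero that keeps the graph alive.
        return pred_height.sum() * 0.0
    return F.huber_loss(diff, torch.zeros_like(diff), delta=delta)

# Usage with random tensors standing in for network output and sparse targets:
pred = torch.randn(2, 64, 64)
gt = torch.randn(2, 64, 64)
mask = torch.rand(2, 64, 64) < 0.02  # ~2% of cells hold a radar point
loss = masked_huber_height_loss(pred, gt, mask)

Masking before the reduction is the key step: averaging an unmasked loss over the full map would let the overwhelming majority of empty cells wash out the signal from the few valid ones.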
Key words
Multi-task Learning, Object Detection, Perceptual Task, Height Values, Radar Data, Depth Estimation, Object Detection Task, Height Information, Average Absolute Error, Height Of Point, Robust Loss, Height Error, Loss Function, Free Space, Image Plane, Weighting Factor, Point Values, 3D Space, Bounding Box, Model Architecture, L1 Loss, Height Estimation, Height Map, L2 Loss, Huber Loss, Ground Truth Map, Radar Cross Section, Radar Images, Radar Sensor, Sparse Regression