Federated Cooperative 3D Object Detection for Autonomous Driving

2023 IEEE 33rd International Workshop on Machine Learning for Signal Processing (MLSP)(2023)

Abstract
Federated learning has shown great potential for improving the accuracy of models designed for connected autonomous vehicles (CAVs). However, existing approaches focus only on data collected by CAVs, ignoring the valuable insights provided by other types of clients, such as road-side units (RSUs). In this paper, we propose an approach that combines federated learning with cooperative perception to create a more comprehensive and robust global model. Our approach adopts a multi-layered structure that partitions CAVs and RSUs into local clusters. By incorporating the data captured by both CAVs and RSUs, this novel approach can lead to a more accurate and comprehensive global model that reflects the collective knowledge of all agents. We evaluate the proposed approach on the V2X-Set benchmark. The overall average precision of our approach using RSUs reaches 68.62% at an Intersection-over-Union (IoU) threshold of 0.5, significantly outperforming traditional CAV-based federated learning.
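The abstract describes a multi-layered structure that first aggregates within local clusters of CAVs and RSUs and then combines the cluster models globally. The paper itself does not give implementation details here, but the idea can be sketched as two nested rounds of weighted parameter averaging (FedAvg-style); the function names, the flat parameter vectors, and the `(params, num_samples)` client representation below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def fed_avg(weights, sizes):
    """Weighted average of flat model parameter vectors (FedAvg-style).
    `weights` is a list of np.ndarray, `sizes` the matching sample counts."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

def hierarchical_fed_avg(clusters):
    """Two-level aggregation (an illustrative sketch of the paper's
    multi-layered structure): average within each local cluster of
    CAV/RSU clients, then average the cluster models globally,
    weighting each cluster by its total sample count.
    `clusters` is a list of clusters; each cluster is a list of
    (params, num_samples) pairs."""
    cluster_models, cluster_sizes = [], []
    for clients in clusters:
        ws = [w for w, _ in clients]
        ns = [n for _, n in clients]
        cluster_models.append(fed_avg(ws, ns))
        cluster_sizes.append(sum(ns))
    return fed_avg(cluster_models, cluster_sizes)
```

Because both levels weight by sample count, the result coincides with a single global weighted average over all clients; the layered form matters in practice for communication topology (RSUs can serve as cluster aggregators) rather than for the final arithmetic.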
Key words
Cooperative perception, federated learning, autonomous driving, vision transformer