SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception
CoRR (2024)
Abstract
Recently, event-based vision sensors have gained attention for autonomous
driving applications, as conventional RGB cameras face limitations in handling
challenging dynamic conditions. However, the availability of real-world and
synthetic event-based vision datasets remains limited. In response to this gap,
we present SEVD, a first-of-its-kind multi-view ego and fixed perception
synthetic event-based dataset recorded using multiple dynamic vision sensors within the
CARLA simulator. Data sequences are recorded across diverse lighting (noon,
nighttime, twilight) and weather conditions (clear, cloudy, wet, rainy, foggy)
with domain shifts (discrete and continuous). SEVD spans urban, suburban,
rural, and highway scenes featuring various classes of objects (car, truck,
van, bicycle, motorcycle, and pedestrian). Alongside event data, SEVD includes
RGB imagery, depth maps, optical flow, and semantic and instance segmentation,
facilitating a comprehensive understanding of the scene. Furthermore, we
evaluate the dataset using state-of-the-art event-based (RED, RVT) and
frame-based (YOLOv8) methods for traffic participant detection tasks and
provide baseline benchmarks for assessment. Additionally, we conduct
experiments to assess the synthetic event-based dataset's generalization
capabilities. The dataset is available at
https://eventbasedvision.github.io/SEVD
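
To make the recording setup concrete, the sketch below shows how a dynamic vision sensor can be attached to an ego vehicle in CARLA and streamed as per-pixel events. This is a minimal illustration assuming a running CARLA server on localhost:2000; the vehicle blueprint, sensor resolution, and mounting transform are placeholder values, not the exact configuration used for SEVD.

```python
# Minimal sketch: recording DVS events from an ego vehicle in CARLA.
# Assumes a CARLA server is running on localhost:2000; parameter values
# (resolution, mount position, vehicle model) are illustrative only.
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego vehicle at the first available spawn point.
vehicle_bp = blueprints.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

# Attach a dynamic vision sensor (DVS) camera to the ego vehicle.
dvs_bp = blueprints.find('sensor.camera.dvs')
dvs_bp.set_attribute('image_size_x', '1280')  # assumed resolution
dvs_bp.set_attribute('image_size_y', '720')
dvs_transform = carla.Transform(carla.Location(x=1.5, z=2.0))
dvs = world.spawn_actor(dvs_bp, dvs_transform, attach_to=vehicle)

def on_events(event_array):
    # Each DVS event carries pixel coordinates, a timestamp, and polarity.
    for e in event_array:
        print(e.x, e.y, e.t, e.pol)

dvs.listen(on_events)
```

A fixed-perception stream can be obtained the same way by spawning the DVS sensor at a static roadside transform instead of attaching it to a vehicle.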