VMV-GCN: Volumetric Multi-View Based Graph CNN for Event Stream Classification

IEEE Robotics and Automation Letters (2022)

Abstract
Event cameras perceive pixel-level brightness changes and output asynchronous event streams, offering notable advantages in high temporal resolution, high dynamic range, and low power consumption for challenging vision tasks. To apply existing learning models to event data, many researchers integrate sparse events into dense frame-based representations that can work directly with convolutional neural networks. Although these works achieve high performance on event-based classification, their models require large numbers of parameters to process dense event frames, which is at odds with the sparsity of event data. To exploit the sparse nature of events, we propose a voxel-wise graph learning model (VMV-GCN) for spatio-temporal feature learning on event streams. Specifically, we design a volumetric multi-view fusion module (VMVF) to extract spatial and temporal information from views of voxelized event data. We then take representative event voxels as vertices and connect them with a novel dual-graph construction strategy. By aggregating neighborhood information based on the relationships of vertices, the proposed dynamic neighborhood feature learning module (DNFL) captures discriminative spatio-temporal features on dynamically updated graphs. Experiments show that our method achieves state-of-the-art performance with low model complexity on event-based classification tasks such as object classification and action recognition.
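The authors' implementation is not reproduced here, but the pipeline the abstract outlines (voxelize the event stream, keep non-empty voxels as graph vertices, connect them with nearest-neighbor edges, and aggregate neighborhood features on the resulting graph) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's method: the grid size, the per-voxel features (event count and mean polarity), the value of k, and the EdgeConv-style aggregation are all assumptions standing in for the VMVF and DNFL modules.

```python
# Minimal sketch (assumptions, not the authors' code): voxelize events,
# take non-empty voxels as vertices, build a k-NN graph, and aggregate
# neighborhood features EdgeConv-style.
import torch
import torch.nn as nn


def voxelize_events(events, grid=(16, 16, 8)):
    """events: (N, 4) tensor of (x, y, t, polarity), with x/y/t in [0, 1).
    Returns voxel centroids (V, 3) and per-voxel features (V, 2):
    event count and mean polarity (an assumed feature choice)."""
    gx, gy, gt = grid
    ix = (events[:, 0] * gx).long().clamp(0, gx - 1)
    iy = (events[:, 1] * gy).long().clamp(0, gy - 1)
    it = (events[:, 2] * gt).long().clamp(0, gt - 1)
    flat = (ix * gy + iy) * gt + it                    # flat voxel index
    uniq, inv = torch.unique(flat, return_inverse=True)
    V = uniq.numel()
    count = torch.zeros(V).index_add_(0, inv, torch.ones(len(events)))
    pol = torch.zeros(V).index_add_(0, inv, events[:, 3]) / count
    cx = torch.zeros(V).index_add_(0, inv, events[:, 0]) / count
    cy = torch.zeros(V).index_add_(0, inv, events[:, 1]) / count
    ct = torch.zeros(V).index_add_(0, inv, events[:, 2]) / count
    centroids = torch.stack([cx, cy, ct], dim=1)       # (V, 3)
    feats = torch.stack([count, pol], dim=1)           # (V, 2)
    return centroids, feats


def knn_graph(pos, k=8):
    """Indices of the k nearest neighbors of each vertex: (V, k)."""
    d = torch.cdist(pos, pos)                          # pairwise distances
    return d.topk(k + 1, largest=False).indices[:, 1:]  # drop self


class EdgeConv(nn.Module):
    """EdgeConv-style aggregation: embed [x_i, x_j - x_i] for each
    neighbor j of vertex i, then max-pool over the neighborhood."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x, nbr):                         # x: (V, C), nbr: (V, k)
        xi = x.unsqueeze(1).expand(-1, nbr.size(1), -1)
        xj = x[nbr]                                    # neighbor features
        e = self.mlp(torch.cat([xi, xj - xi], dim=-1))
        return e.max(dim=1).values                     # (V, out_dim)


events = torch.rand(5000, 4)                           # toy event stream
pos, feats = voxelize_events(events)
nbr = knn_graph(pos, k=8)
out = EdgeConv(in_dim=2, out_dim=64)(feats, nbr)
```

A dynamically updated graph in the sense the abstract describes would recompute the k-NN graph from the learned features (e.g., knn_graph(out)) before the next layer, as in DGCNN; the paper's dual-graph construction and volumetric multi-view fusion are not reproduced in this sketch.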
Keywords
Deep learning for visual perception, object detection, segmentation and categorization