A Data-Driven Approach for the Localization of Interacting Agents via a Multi-Modal Dynamic Bayesian Network Framework

2022 18th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)

Abstract
This paper proposes a multi-modal situational interaction model for collaborative agents by fusing multi-sensorial information in a Multi-Agent Hierarchical Dynamic Bayesian Network (MAH-DBN) framework. The proposed model is learned in a data-driven manner to estimate the states of interacting agents from video sequences alone. This can be regarded as a two-fold methodology for improving vision-based localization and interaction between autonomous agents. In the learning stage, the odometry model is used to drive the video learning model toward robust localization and interaction modeling. During the testing phase, the learned MAH-DBN model is used to localize collaborative agents from video sequences alone, via a proposed inference method called the Multi-Agent Coupled Markov Jump Particle Filter (MAC-MJPF).
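The MAC-MJPF inference described above builds on particle filtering. As background only (not the paper's method, which additionally couples multiple agents and jump-Markov switching dynamics), the following is a minimal sketch of one predict-update-resample cycle of a generic bootstrap particle filter for 1-D localization; all function names, noise parameters, and models here are illustrative assumptions.

```python
import math
import random

def particle_filter_step(particles, weights, control, observation,
                         motion_noise=0.5, obs_noise=1.0):
    """One predict-update-resample cycle of a bootstrap particle filter.

    Illustrative sketch: 1-D state, additive-Gaussian motion (odometry)
    model, and Gaussian observation likelihood.
    """
    # Predict: propagate each particle through the motion model.
    particles = [p + control + random.gauss(0.0, motion_noise)
                 for p in particles]
    # Update: reweight particles by the likelihood of the observation.
    weights = [w * math.exp(-0.5 * ((observation - p) / obs_noise) ** 2)
               for p, w in zip(particles, weights)]
    total = sum(weights) or 1e-300  # guard against total weight collapse
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    weights = [1.0 / len(particles)] * len(particles)
    return particles, weights

def estimate(particles, weights):
    """Weighted-mean state estimate."""
    return sum(p * w for p, w in zip(particles, weights))
```

In the paper's setting, the odometry model plays this motion-model role during learning, while at test time the observation likelihood comes from the video-based model; the coupled multi-agent and jump-Markov extensions are what MAC-MJPF adds on top of this basic cycle.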