The Johns Hopkins University Multimodal Dataset For Human Action Recognition

Radar Sensor Technology XIX; and Active and Passive Signatures VI (2015)

Abstract
The Johns Hopkins University Multi-Modal Action (JHUMMA) dataset contains twenty-one actions recorded with four sensor systems spanning three modalities. The data were collected with an acquisition system comprising three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor, which provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and report benchmark baseline recognition results.
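The acoustic modality relies on the micro-Doppler effect: moving limbs modulate the frequency of the sonar return around the carrier. The paper does not publish its processing code, but a standard way to expose these modulations is a short-time Fourier transform of the echo. The sketch below is purely illustrative; the sampling rate, carrier frequency, and modulation parameters are invented for the toy signal and are not taken from the JHUMMA recordings.

```python
import numpy as np
from scipy.signal import stft

fs = 40_000  # assumed sampling rate (Hz); illustrative, not from the paper
t = np.arange(0, 1.0, 1 / fs)

# Toy echo: a carrier at f_c with a sinusoidal frequency modulation
# mimicking a swinging limb (all parameter values are made up).
f_c, f_mod, f_dev = 8_000.0, 2.0, 150.0
phase = 2 * np.pi * f_c * t + (f_dev / f_mod) * np.sin(2 * np.pi * f_mod * t)
echo = np.cos(phase)

# Short-time Fourier transform -> micro-Doppler spectrogram
f, tau, Zxx = stft(echo, fs=fs, nperseg=1024, noverlap=768)
spectrogram = np.abs(Zxx)

# The time-averaged spectrum should concentrate near the carrier,
# smeared by +/- f_dev due to the micro-Doppler modulation.
peak_bin = spectrogram.mean(axis=1).argmax()
peak_freq = f[peak_bin]
```

A classifier would then operate on `spectrogram` (or features derived from it) rather than the raw echo, since the modulation pattern around the carrier is what distinguishes different body movements.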
Keywords
active acoustics, human action recognition, micro-Doppler effect, multimodal action dataset, multistatic sonar, micro-Doppler modulations