An Effective View And Time-Invariant Action Recognition Method Based On Depth Videos

2015 Visual Communications and Image Processing (VCIP)

Cited by 6 | Views 15
Abstract
Progress in hand-crafted-feature-based human action recognition (HAR) on RGB videos has stalled in recent years. The emergence of low-cost depth cameras provides richer information for action recognition. Compared to RGB videos, depth video sequences are less sensitive to lighting changes and more discriminative in many vision tasks such as segmentation and activity recognition. In this paper, we propose an effective and straightforward HAR method that uses the skeleton joint information of depth sequences. First, we compute three feature vectors that capture angle and position information between joints. Then, the obtained vectors are fed into three separate support vector machine (SVM) classifiers. Finally, action recognition is performed by fusing the SVM classification results. Our features are view-invariant because the extracted vectors contain only angles and normalized positions derived from joint coordinates. By interpolating action videos of different temporal lengths to a fixed size, the extracted features have the same dimension for all videos while preserving the principal movement patterns, which makes the proposed method time-invariant. Experimental results show that our method achieves results comparable to state-of-the-art methods on the UTKinect-Action3D dataset while being more efficient and simpler.
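The two core ideas in the abstract — resampling sequences of different temporal lengths to a fixed number of frames by interpolation, and computing view-invariant joint angles — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the target length of 30 frames, and the `(T, J, 3)` joint-array layout are assumptions for the example.

```python
import numpy as np

def normalize_length(seq, target_len=30):
    """Resample a (T, J, 3) skeleton sequence to target_len frames
    by linear interpolation along the time axis, so features have
    the same dimension regardless of the original video length."""
    T = seq.shape[0]
    src = np.linspace(0.0, 1.0, T)          # original frame positions
    dst = np.linspace(0.0, 1.0, target_len) # resampled frame positions
    flat = seq.reshape(T, -1)
    out = np.empty((target_len, flat.shape[1]))
    for d in range(flat.shape[1]):
        out[:, d] = np.interp(dst, src, flat[:, d])
    return out.reshape(target_len, *seq.shape[1:])

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by segments b->a and b->c.
    Angles depend only on relative joint geometry, hence view-invariant."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

The resampled angle/position vectors would then be fed to separate SVM classifiers (e.g. scikit-learn's `SVC`) whose outputs are fused, as the paper describes.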
Keywords
UTKinect-Action3D dataset, SVM classifiers, support vector machine classifiers, activity recognition, depth video sequences, RGB videos, HAR method, hand-crafted-feature-based human action recognition, time-invariant action recognition method