Differential Recurrent Neural Networks for Action Recognition

2015 IEEE International Conference on Computer Vision (ICCV), 2015

Cited 582 | Viewed 162
Abstract
The long short-term memory (LSTM) neural network is capable of processing complex sequential information, since it uses special gating schemes to learn representations from long input sequences. It has the potential to model any sequential time-series data in which the current hidden state must be considered in the context of the past hidden states. This property makes LSTM an ideal choice for learning the complex dynamics of various actions. Unfortunately, conventional LSTMs do not consider the impact of the spatio-temporal dynamics corresponding to salient motion patterns when they gate the information that ought to be memorized through time. To address this problem, we propose a differential gating scheme for the LSTM neural network, which emphasizes the change in information gain caused by the salient motions between successive frames. This change in information gain is quantified by the Derivative of States (DoS), and the proposed LSTM model is thus termed the differential Recurrent Neural Network (dRNN). We demonstrate the effectiveness of the proposed model by automatically recognizing actions from real-world 2D and 3D human action datasets. Our study is one of the first works toward demonstrating the potential of learning complex time-series representations via high-order derivatives of states.
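To make the differential gating idea concrete, below is a minimal sketch of a first-order dRNN-style cell, assuming the DoS is approximated by the discrete difference of the internal cell state between successive time steps and that only this first-order derivative drives the gates. The class name, weight shapes, and initialization are illustrative assumptions for this sketch, not the authors' exact formulation, which also covers higher-order derivatives.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DifferentialLSTMCell:
    """Illustrative first-order dRNN-style cell (names/shapes are assumptions).

    The input, forget, and output gates are driven by the Derivative of
    States (DoS), approximated here as the difference of the internal cell
    state between successive time steps, instead of by the previous hidden
    state alone.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        k = 1.0 / np.sqrt(hidden_size)
        # One block of rows per gate/candidate: [input, forget, output, candidate].
        self.W_x = rng.uniform(-k, k, (4 * hidden_size, input_size))   # frame input x_t
        self.W_d = rng.uniform(-k, k, (4 * hidden_size, hidden_size))  # DoS ds_{t-1}
        self.b = np.zeros(4 * hidden_size)
        self.hidden_size = hidden_size

    def init_state(self):
        H = self.hidden_size
        return np.zeros(H), np.zeros(H), np.zeros(H)  # hidden h, cell s, DoS ds

    def step(self, x_t, state):
        h_prev, s_prev, ds_prev = state
        z = self.W_x @ x_t + self.W_d @ ds_prev + self.b
        H = self.hidden_size
        i = sigmoid(z[0 * H:1 * H])        # input gate
        f = sigmoid(z[1 * H:2 * H])        # forget gate
        o = sigmoid(z[2 * H:3 * H])        # output gate
        g = np.tanh(z[3 * H:4 * H])        # candidate internal state
        s_t = f * s_prev + i * g           # new internal (cell) state
        ds_t = s_t - s_prev                # first-order Derivative of States
        h_t = o * np.tanh(s_t)             # hidden state / output
        return h_t, (h_t, s_t, ds_t)

# Usage sketch: run the cell over a sequence of per-frame feature vectors.
cell = DifferentialLSTMCell(input_size=64, hidden_size=128)
state = cell.init_state()
frames = np.random.default_rng(1).normal(size=(30, 64))  # 30 frames, 64-dim features
for x_t in frames:
    h_t, state = cell.step(x_t, state)
# h_t (or a pooling over the hidden states) would feed an action classifier.
```

The key design point the sketch illustrates is that frames producing little change in the internal state yield a small DoS and thus weakly open the gates, while salient motions produce a large DoS and are preferentially memorized.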
Keywords
differential recurrent neural networks,action recognition,long short-term memory neural network,LSTM neural network,complex sequential information processing,long input sequences,sequential time-series data,complex dynamics learning,spatiotemporal dynamics,salient motion patterns,differential gating,information gain,derivative of states,DoS,dRNN,2D human action datasets,3D human action datasets,complex time-series representation learning