Temporal Enhanced Multi-Stream Graph Convolutional Neural Networks for Skeleton-Based Action Recognition

2021 China Automation Congress (CAC)

Abstract
Compared with image sequences, skeleton sequences are an ideal choice for human action recognition because they contain no redundant information and are lightweight. Recently, tremendous breakthroughs have been made in skeleton-based human activity recognition. By way of illustration, Spatial-Temporal Graph Convolutional Networks (ST-GCN) creatively distill the information of human joints into a graph structure, and Two-Stream Adaptive Graph Convolutional Networks (2S-AGCN) explicitly combine the length and orientation information of bones with joint information for prediction. However, some issues remain in these GCN-based models: they lack long-term dependency modeling capability and do not explore the deep correlation between joints and bones. In this work, we propose temporal enhanced multi-stream graph convolutional neural networks (TAMS-GCN) for skeleton-based action recognition. We combine a temporal attention module with a graph convolutional neural network: the skeletal information of each frame is extracted and actively incorporated into the global features by globally pooling all joints of that frame, yielding attention at the frame level of the action. In addition, we propose a fusion network of joint and bone information that implicitly learns the connection between joints and bones, increasing the compactness between them. We evaluated our TAMS-GCN model on the NTU-RGBD dataset, where it achieves excellent performance compared to the state of the art.
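The abstract only outlines the frame-level attention idea (pool all joints of each frame, score the frames, and re-weight them before they are merged into the global features); the exact design is not given here. The following is a minimal, hypothetical PyTorch sketch of such a module, assuming the common (batch, channels, frames, joints) tensor layout used by ST-GCN-style models. The module name, the gating MLP, and the reduction parameter are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class TemporalFrameAttention(nn.Module):
    """Hypothetical sketch of frame-level temporal attention.

    Pools all joints of each frame into a single descriptor, scores every
    frame with a small gating network, and re-weights the per-frame features
    before they are merged into the global representation.
    Tensor layout: (batch, channels, frames, joints).
    """

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, T, V) -> pool over the joint dimension V
        frame_desc = x.mean(dim=-1)                      # (N, C, T)
        scores = self.gate(frame_desc.transpose(1, 2))   # (N, T, 1)
        attn = torch.softmax(scores, dim=1)              # attention over frames
        # Broadcast frame weights back onto every joint feature
        return x * attn.transpose(1, 2).unsqueeze(-1)    # (N, C, T, V)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 50, 25)   # 2 samples, 64 channels, 50 frames, 25 joints
    out = TemporalFrameAttention(64)(feat)
    print(out.shape)                    # torch.Size([2, 64, 50, 25])
```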
Keywords
skeleton-based human activity recognition, graph convolutional network, temporal enhanced, joint and bone fusion