Context-Aware Memory Attention Network for Video-Based Action Recognition

2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP)(2022)

Abstract
Human action recognition is widely researched in the computer vision community; the current challenge is to make it efficient enough for wide deployment. In this paper, we propose the Context-Aware Memory Attention Network (CAMA-Net), a human action recognition model that requires neither optical flow extraction nor 3D convolution. Its core component, the Context-Aware Memory Attention Module, computes relevance scores between the key and value pairs derived from the backbone output. The proposed method is evaluated on the popular public action recognition datasets UCF101 and HMDB51, where it outperforms existing baseline models.
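The abstract does not give CAMA-Net's exact formulation, but the idea of scoring relevance between keys and values from a backbone and aggregating by attention can be illustrated with a minimal generic dot-product attention sketch. All names and shapes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def attention_relevance(keys, values, query):
    """Generic scaled dot-product attention sketch (hypothetical;
    not the paper's exact CAMA module).
    keys:   (N, d) feature vectors from a backbone
    values: (N, d) features paired with the keys
    query:  (d,)   context vector
    """
    # relevance score of the query against every key
    scores = keys @ query / np.sqrt(keys.shape[1])
    # softmax over the N positions (numerically stabilized)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # relevance-weighted aggregation of the values
    return weights @ values

# toy example: 4 backbone features of dimension 8
rng = np.random.default_rng(0)
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
q = rng.standard_normal(8)
out = attention_relevance(K, V, q)
print(out.shape)  # (8,)
```

In a video model, the N positions would typically correspond to spatio-temporal locations in the backbone feature map, so the attended output summarizes the clip without 3D convolution or optical flow.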
Key words
Action Recognition, Deep Learning, Convolutional Neural Network, Attention