
Multi-Modal Multi-Channel American Sign Language Recognition.

Int. J. Artif. Intell. Robotics Res. (2024)

Abstract
In this paper, we propose a 3D Convolutional Neural Network (3DCNN) based multi-stream framework to recognize American Sign Language (ASL) manual signs and non-manual gestures (face and head movements) in real time from RGB-D videos. Our approach fuses multimodal features, including hand gestures, facial expressions, and body poses, from multiple channels (RGB, depth, motion, and skeleton joints). To capture the overall temporal dynamics of a video, a proxy video is generated by selecting a subset of frames from each video; these proxy videos are then used to train the proposed 3DCNN model. Our method achieves 92.88% accuracy in recognizing 100 ASL sign glosses on our newly collected ASL-100-RGBD dataset. The effectiveness of our framework for recognizing hand gestures from RGB-D videos is further demonstrated on a large-scale dataset, Chalearn IsoGD, where it achieves state-of-the-art results.
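The abstract's proxy-video step, selecting a fixed-size subset of frames so the 3DCNN sees the whole clip's temporal dynamics, can be sketched as follows. The paper does not specify the selection rule here, so this is a minimal sketch assuming uniform temporal sampling; the function name `make_proxy_video` and the 16-frame default are illustrative, not from the paper.

```python
import numpy as np

def make_proxy_video(frames: np.ndarray, num_proxy_frames: int = 16) -> np.ndarray:
    """Build a proxy video by selecting evenly spaced frames from a clip.

    frames: array of shape (T, H, W, C).
    Returns an array of shape (num_proxy_frames, H, W, C).
    """
    total = frames.shape[0]
    # Evenly spaced indices over the full clip, so the proxy spans the
    # entire sign rather than only its beginning.
    idx = np.linspace(0, total - 1, num_proxy_frames).round().astype(int)
    return frames[idx]

# Example: a 120-frame RGB clip reduced to a 16-frame proxy for the 3DCNN.
clip = np.zeros((120, 224, 224, 3), dtype=np.uint8)
proxy = make_proxy_video(clip, 16)
print(proxy.shape)  # (16, 224, 224, 3)
```

A fixed proxy length also gives every training sample the same temporal dimension, which a 3D convolutional input layer requires.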