Monocular Keypoint based Pull-ups Measurement on Strict Pull-ups Benchmark.

CSSE (2021)

Abstract
The pull-up is one of the standard exercises for assessing personal physical fitness. Traditional manual assessment is subjective and inefficient, while existing automatic methods place high demands on venues and equipment. In this paper, we propose a monocular vision-based pull-up measurement method built on human keypoint estimation, evaluated on our proposed strict pull-up benchmark. Specifically, the deep neural network HRNet is employed to estimate the human body keypoints frame by frame. Face keypoints are estimated with Dlib on the facial region, while the position of the horizontal bar is estimated with Canny edge detection and the Hough transform on the hand region. After the keypoint trajectories are smoothed and denoised with a Savitzky-Golay filter, our proposed algorithm recognizes both valid and invalid repetitions. On the strict pull-up video dataset we collected, the proposed method achieves an average counting accuracy of 91.5%. The dataset and code will be released soon.
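To make two of the steps described in the abstract concrete, the sketch below illustrates (a) locating the horizontal bar inside a hand-region crop with Canny edge detection and the probabilistic Hough transform, and (b) smoothing a per-frame keypoint trajectory with a Savitzky-Golay filter before repetition counting. This is not the authors' released code; the function names, crop assumptions, and filter parameters are illustrative choices.

```python
# Minimal sketch of bar localization and keypoint smoothing, assuming a
# grayscale crop around the hands and a 1-D keypoint trajectory (e.g. chin
# height per frame). Thresholds and filter parameters are assumed values.
from typing import Optional

import cv2
import numpy as np
from scipy.signal import savgol_filter


def estimate_bar_y(hand_region: np.ndarray) -> Optional[float]:
    """Estimate the vertical position of the horizontal bar in a grayscale
    hand-region crop via Canny edges + probabilistic Hough transform."""
    edges = cv2.Canny(hand_region, 50, 150)
    lines = cv2.HoughLinesP(
        edges, rho=1, theta=np.pi / 180, threshold=40,
        minLineLength=hand_region.shape[1] // 2, maxLineGap=10,
    )
    if lines is None:
        return None
    # Keep only roughly horizontal segments and average their height.
    ys = [(y1 + y2) / 2 for x1, y1, x2, y2 in lines[:, 0] if abs(y2 - y1) < 5]
    return float(np.mean(ys)) if ys else None


def smooth_keypoint_track(y_per_frame: np.ndarray) -> np.ndarray:
    """Denoise a 1-D keypoint trajectory with a Savitzky-Golay filter;
    window length and polynomial order here are assumptions."""
    return savgol_filter(y_per_frame, window_length=11, polyorder=3)
```

The smoothed trajectory and the estimated bar height could then feed a counting rule (for example, a repetition is valid only if the chin keypoint rises above the bar line), but the exact validity criteria are defined by the paper's algorithm rather than by this sketch.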