VidAF: A Motion-Robust Model for Screening Atrial Fibrillation from Facial Videos.

IEEE Journal of Biomedical and Health Informatics (2021)

Abstract
Atrial fibrillation (AF) is the most common arrhythmia, but an estimated 30% of patients with AF are unaware of their condition. The purpose of this work is to design a model for AF screening from facial videos, with a focus on addressing typical real-life motion disturbances such as head movements and expression changes. The model detects a pulse signal from the skin color changes in a facial video using a convolutional neural network, incorporating a phase-driven attention mechanism to suppress motion signals in the spatial domain. It then encodes the pulse signal into discriminative features for AF classification with a coding neural network, using a denoising coding strategy to improve the robustness of the features to motion signals in the time domain. The proposed model was tested on a dataset containing 1200 samples from 100 AF patients and 100 non-AF subjects. Experimental results demonstrated that VidAF was significantly robust to facial motion, predicting clean pulse signals with a mean absolute error of inter-pulse intervals below 100 milliseconds. In addition, the model achieved promising performance in AF identification, with an accuracy of more than 90% in multiple challenging scenarios. VidAF provides a more convenient and cost-effective approach for opportunistic AF screening in the community.
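The abstract describes a two-stage pipeline: a spatial-attention CNN that recovers a pulse signal from a facial video, followed by an encoder that classifies the pulse as AF or non-AF. The sketch below illustrates that structure only; all module names, layer sizes, and the simple sigmoid attention map are assumptions made for illustration and do not reproduce the paper's actual VidAF architecture or its phase-driven attention and denoising coding.

```python
# Minimal sketch of the two-stage pipeline (assumed structure, not the paper's code).
import torch
import torch.nn as nn


class PulseExtractor(nn.Module):
    """Stage 1: map a facial-video clip (B, 3, T, H, W) to a 1-D pulse signal (B, T).

    A learned spatial attention map (a stand-in for the paper's phase-driven
    attention) down-weights pixels dominated by motion before pooling.
    """

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.attention = nn.Sequential(           # per-pixel weights in [0, 1]
            nn.Conv3d(32, 1, kernel_size=1), nn.Sigmoid(),
        )
        self.to_pulse = nn.Conv3d(32, 1, kernel_size=1)

    def forward(self, video):
        f = self.features(video)                  # (B, 32, T, H, W)
        w = self.attention(f)                     # (B, 1, T, H, W)
        pulse_map = self.to_pulse(f) * w          # suppress motion-dominated pixels
        return pulse_map.mean(dim=(1, 3, 4))      # spatial average -> (B, T)


class AFClassifier(nn.Module):
    """Stage 2: encode the pulse signal into features and classify AF vs. non-AF."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)               # logits for {non-AF, AF}

    def forward(self, pulse):
        z = self.encoder(pulse.unsqueeze(1)).squeeze(-1)   # (B, 32)
        return self.head(z)


if __name__ == "__main__":
    clip = torch.randn(2, 3, 90, 36, 36)           # 2 clips, 90 frames, 36x36 face crops
    pulse = PulseExtractor()(clip)                 # (2, 90) estimated pulse signals
    logits = AFClassifier()(pulse)                 # (2, 2) classification logits
    print(pulse.shape, logits.shape)
```

In this toy version the attention weights are learned directly from the video features, whereas the paper drives them with phase information to separate color-based pulse content from motion; the denoising strategy applied to the encoded features is likewise omitted here.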