Using Semi-Automated Annotation and Optical Character Recognition for Transcription of Patient Monitors Using Smartphone Camera.

Jan Federico Coscolluela IV, Marbert John Chang Marasigan, Joel Macalino, Miguel Aljibe, Alvin Marcelo

International Conference on Digital Medicine and Image Processing (2023)

Abstract
Vital signs monitoring is a key function in healthcare delivery, ensuring immediate and precise evaluation of a patient's well-being. It is done by attaching monitoring devices to patients that collect, store, and display values on a screen. In many low- and middle-income countries (LMICs), hospitals still rely on manual observation and handwritten documentation of vital signs, which is susceptible to human error, data tampering, process inefficiency, and limited opportunities for comprehensive data analysis. More advanced hospitals use interface engines that transmit data to electronic medical records, but these tend to be model-specific and very costly. This paper proposes a cost-effective, non-invasive alternative for digitizing vital signs data in low-resource healthcare settings using optical character recognition (OCR). A contour-based screen extraction procedure isolates the patient monitor screen based on edge visibility, allowing a well-defined region to be extracted flexibly across different monitor models. An object detection model is then trained to localize the vital signs, followed by data extraction using OCR. The study contributes a newly collected dataset of over 4000 images of the Mindray Beneview T8 patient monitor with multi-parameter annotations. Results show that performing screen extraction prior to object detection significantly improved the mean Average Precision (mAP) from 68.55% to 93.65% at an IoU threshold of 0.7.