An Interpretable Neonatal Lung Ultrasound Feature Extraction and Lung Sliding Detection System Using Object Detectors

IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE (2024)

Abstract
The objective of this study was to develop an interpretable system that detects specific lung features in neonates. A challenging aspect of this work is that normal lungs can show the same visual features as a Pneumothorax (PTX). M-mode imaging is typically needed to differentiate the two cases, but generating it in the clinic is time-consuming and its interpretation requires expertise that remains limited. Our system therefore automates M-mode generation by extracting Regions of Interest (ROIs) without a human in the loop. Object detection models, namely Faster Region-Based Convolutional Neural Network (fRCNN) and RetinaNet, were employed to detect seven common Lung Ultrasound (LUS) features. The fRCNN predictions were stored and then used to generate M-modes. Beyond static feature extraction, a Hough transform based statistical method was used to detect "lung sliding" in these M-modes. Results showed that fRCNN achieved a higher mean Average Precision (mAP) of 86.57% (Intersection-over-Union (IoU) = 0.2) than RetinaNet, which reached a mAP of 61.15%. The accuracy of the generated ROIs was 97.59% for normal videos and 96.37% for PTX videos. Using this system, 5 PTX and 6 normal video cases were classified with 100% accuracy. Automating the detection of seven prominent LUS features addresses the time-consuming manual evaluation of lung ultrasound in a fast-paced clinical environment. Clinical impact: This work provides a more accurate and efficient method for diagnosing lung diseases in neonates.
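The following is a minimal, illustrative sketch of the two post-detection steps outlined in the abstract: building an M-mode image from a pleural-line ROI predicted by the object detector, and applying a Hough transform based check for lung sliding. It is not the authors' implementation; the frame format, the (x1, y1, x2, y2) ROI convention, and all thresholds are assumptions, and OpenCV/NumPy are used only for illustration.

```python
# Minimal sketch (not the paper's code): M-mode construction from a detected
# pleural-line ROI and a Hough transform based lung sliding check.
# The ROI format, frame shapes, and thresholds below are assumptions.
import numpy as np
import cv2


def build_mmode(frames, roi):
    """Stack one pixel column from the ROI of every frame into an M-mode image.

    frames : list of 2-D grayscale ultrasound frames (uint8 arrays)
    roi    : (x1, y1, x2, y2) box around the pleural line, e.g. an fRCNN prediction
    """
    x1, y1, x2, y2 = roi
    col = (x1 + x2) // 2                       # scan line through the ROI centre
    # each M-mode column is the depth profile along that scan line in one frame
    return np.stack([f[y1:y2, col] for f in frames], axis=1)


def has_lung_sliding(mmode, line_len_frac=0.6, max_horizontal_lines=2):
    """Heuristic check: long horizontal lines below the pleura ('barcode' pattern)
    suggest absent lung sliding (PTX); a granular 'seashore' texture suggests
    normal sliding. Thresholds are illustrative only."""
    edges = cv2.Canny(mmode, 50, 150)
    min_len = int(line_len_frac * mmode.shape[1])
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=min_len, maxLineGap=5)
    if lines is None:
        return True                            # no long lines -> sliding present
    horizontal = sum(1 for x1, y1, x2, y2 in lines[:, 0]
                     if abs(y2 - y1) <= 2)     # nearly horizontal segments
    return horizontal <= max_horizontal_lines


# Usage with synthetic data (random speckle stands in for real LUS frames).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (256, 256), dtype=np.uint8) for _ in range(120)]
mmode = build_mmode(frames, roi=(100, 60, 140, 200))
print("lung sliding detected:", has_lung_sliding(mmode))
```

The intuition behind the check is the well-known M-mode appearance of the pleura: a normal lung produces a granular "seashore" pattern below the pleural line, whereas absent lung sliding, as in PTX, yields long horizontal "barcode" lines that a probabilistic Hough transform picks up as near-horizontal segments.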
Keywords
Lung ultrasound, object detection models, faster RCNN, RetinaNet, Hough transform, M-mode, automatic lung sliding detection