Vision-language models for decoding provider attention during neonatal resuscitation
CoRR (2024)
Abstract
Neonatal resuscitations demand an exceptional level of attentiveness from
providers, who must process multiple streams of information simultaneously.
Gaze strongly influences decision making; thus, understanding where a provider
is looking during neonatal resuscitations could inform provider training,
enhance real-time decision support, and improve the design of delivery rooms
and neonatal intensive care units (NICUs). Current approaches to quantifying
neonatal providers' gaze rely on manual coding or simulations, which limit
scalability and utility. Here, we introduce an automated, real-time, deep
learning approach capable of decoding provider gaze into semantic classes
directly from first-person point-of-view videos recorded during live
resuscitations. Combining state-of-the-art, real-time segmentation with
vision-language models (CLIP), our low-shot pipeline attains 91%
accuracy in identifying gaze targets without task-specific training. Upon
fine-tuning, the performance of our gaze-guided vision transformer exceeds 98%
accuracy in gaze classification, approaching human-level precision. This
system, capable of real-time inference, enables objective quantification of
provider attention dynamics during live neonatal resuscitation. Our approach
offers a scalable solution that seamlessly integrates with existing
infrastructure for data-scarce gaze analysis, thereby offering new
opportunities for understanding and refining clinical decision making.
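To make the zero-shot step concrete, here is a minimal sketch (not the authors' code) of how a gaze-centered crop from a first-person frame can be scored against text prompts for candidate gaze targets using the open-source CLIP weights via Hugging Face transformers. The target class names, the prompt template, and the fixed square crop standing in for the paper's real-time segmenter are all illustrative assumptions, not details from the paper.

```python
# Hedged sketch of CLIP-based zero-shot gaze-target classification.
# Assumes: transformers, torch, and Pillow are installed; class names
# and the prompt template below are hypothetical, not from the paper.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical semantic gaze-target classes for a delivery-room scene.
GAZE_TARGETS = ["infant", "monitor", "equipment", "another provider", "parent"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def classify_gaze_crop(frame: Image.Image, gaze_xy: tuple[int, int],
                       crop_size: int = 224) -> str:
    """Classify the gaze target in a first-person video frame.

    Crops a window centered on the gaze point and returns the label CLIP
    ranks highest. In the paper's pipeline a real-time segmenter refines
    this region; a fixed square crop stands in for that step here.
    """
    x, y = gaze_xy
    half = crop_size // 2
    crop = frame.crop((max(x - half, 0), max(y - half, 0), x + half, y + half))

    # Score the crop against one text prompt per candidate target.
    prompts = [f"a photo of a {t} during neonatal resuscitation"
               for t in GAZE_TARGETS]
    inputs = processor(text=prompts, images=crop,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, num_targets)
    return GAZE_TARGETS[logits.argmax(dim=-1).item()]
```

Usage would look like `classify_gaze_crop(Image.open("frame.png"), gaze_xy=(640, 360))`; because CLIP matches images to free-form text, new gaze-target classes can be added by editing the prompt list alone, which is what makes the approach attractive in data-scarce clinical settings.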