PupilScreen

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (2017)

Abstract
Before a person suffering from a traumatic brain injury (TBI) reaches a medical facility, measuring their pupillary light reflex (PLR) is one of the few quantitative measures a clinician can use to predict their outcome. We propose PupilScreen, a smartphone app and accompanying 3D-printed box that combines the repeatability, accuracy, and precision of a clinical device with the ubiquity and convenience of the penlight test that clinicians regularly use in emergency situations. The PupilScreen app stimulates the patient's eyes using the smartphone's flash and records the response using the camera. The PupilScreen box, akin to a head-mounted virtual reality display, controls the eyes' exposure to light. The recorded video is processed using convolutional neural networks that track the pupil diameter over time, allowing clinically relevant measures to be derived. We tested two different network architectures and found that a fully convolutional neural network tracked pupil diameter with a median error of 0.30 mm. We also conducted a pilot clinical evaluation with six patients who had suffered a TBI and found that clinicians could almost perfectly separate unhealthy pupillary light reflexes from healthy ones using PupilScreen alone.
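The abstract describes segmenting the pupil in each video frame with a fully convolutional network and converting the segmentation into a diameter-over-time curve from which clinical PLR measures can be derived. The sketch below is a hypothetical illustration of that kind of pipeline in PyTorch, not the authors' architecture or released code: the network layout, the `mm_per_pixel` scale factor, and the circular-pupil assumption are all placeholders.

```python
# Minimal sketch (not the authors' code): a small fully convolutional network
# that predicts a per-pixel pupil mask for each frame, plus a helper that turns
# the mask into an estimated pupil diameter in millimeters.
import torch
import torch.nn as nn

class PupilFCN(nn.Module):
    """Toy encoder-decoder FCN; outputs a 1-channel pupil probability map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                  # H/2 x W/2
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                  # H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 1, 2, stride=2),           # back to H x W
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))                  # mask logits

def pupil_diameter_mm(mask_logits, mm_per_pixel, threshold=0.5):
    """Estimate diameter from the segmented area, assuming a circular pupil."""
    prob = torch.sigmoid(mask_logits)                         # (N, 1, H, W)
    area_px = (prob > threshold).float().sum(dim=(1, 2, 3))   # pixels per frame
    # area = pi * r^2  =>  diameter = 2 * sqrt(area / pi)
    diameter_px = 2.0 * torch.sqrt(area_px / torch.pi)
    return diameter_px * mm_per_pixel

# Example: run the network over a clip of frames to get a diameter curve,
# from which latency, constriction velocity, etc. could then be computed.
if __name__ == "__main__":
    model = PupilFCN().eval()
    video = torch.rand(30, 3, 128, 128)                       # 30 fake RGB frames
    with torch.no_grad():
        diameters = pupil_diameter_mm(model(video), mm_per_pixel=0.05)
    print(diameters.shape)                                    # torch.Size([30])
```

In this framing, the network only localizes the pupil; the clinically relevant quantities (constriction amplitude, latency, velocity) are computed afterwards from the per-frame diameter series, which matches the abstract's description of deriving measures from the tracked diameter over time.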