Narrowing the gap: expected versus deployment performance

Journal of the American Medical Informatics Association (2023)

Abstract
Objectives

Successful model development requires both an accurate a priori understanding of future performance and high performance on deployment. Optimistic estimations of model performance that are unrealized in real-world clinical settings can contribute to nonuse of predictive models. This study used 2 tasks, predicting ICU mortality and Bi-Level Positive Airway Pressure failure, to quantify: (1) how well internal test performances derived from different methods of partitioning data into development and test sets estimate future deployment performance of Recurrent Neural Network models and (2) the effects of including older data in the training set on models' performance.

Materials and Methods

The cohort consisted of patients admitted between 2010 and 2020 to the Pediatric Intensive Care Unit of a large quaternary children's hospital. 2010–2018 data were partitioned into different development and test sets to measure internal test performance. Deployable models were trained on 2010–2018 data and assessed on 2019–2020 data, which was conceptualized to represent a real-world deployment scenario. Optimism, defined as the overestimation of the deployed performance by internal test performance, was measured. Performances of deployable models were also compared with each other to quantify the effect of including older data during training.

Results, Discussion, and Conclusion

Longitudinal partitioning methods, where models are tested on newer data than the development set, yielded the least optimism. Including older years in the training dataset did not degrade deployable model performance. Using all available data for model development fully leveraged longitudinal partitioning by measuring year-to-year performance.
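The study's evaluation design can be sketched in a few lines: partition records longitudinally so the test set is strictly newer than the development set, then compute optimism as the internal test estimate minus the deployed estimate. This is a minimal illustrative sketch, not the authors' code; the year cutoffs mirror the abstract, while the record structure and AUROC values are hypothetical.

```python
# Hedged sketch of longitudinal partitioning and the "optimism" metric
# described in the abstract. Record fields and metric values are invented
# for illustration; only the 2010-2018 / 2019-2020 split follows the study.

def longitudinal_split(records, cutoff_year):
    """Partition records so models are tested only on data newer than
    the development set (longitudinal partitioning)."""
    development = [r for r in records if r["year"] <= cutoff_year]
    test = [r for r in records if r["year"] > cutoff_year]
    return development, test

def optimism(internal_test_auc, deployed_auc):
    """Optimism: overestimation of deployed performance by the
    internal test performance (positive means the internal estimate
    was too rosy)."""
    return internal_test_auc - deployed_auc

# One placeholder record per admission year, 2010-2020 inclusive.
records = [{"year": y} for y in range(2010, 2021)]
dev, test = longitudinal_split(records, cutoff_year=2018)
print(len(dev), len(test))  # dev spans 2010-2018, test spans 2019-2020

# Hypothetical AUROCs: internal test 0.88, deployment-era 0.85.
print(round(optimism(0.88, 0.85), 2))
```

A split where the test years postdate all development years penalizes temporal drift the way real deployment does, which is why the abstract reports it yields the least optimism.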
Keywords

deployment performance, gap