Everything is varied: The surprising impact of instantial variation on ML reliability

Applied Soft Computing (2023)

Abstract
Instantial variation (IV) refers to variation that is due not to population differences or errors, but to within-subject variation, that is, the intrinsic and characteristic patterns of variation pertaining to a given instance or to the measurement process. Although accounting for IV is critical for the proper analysis of results, this source of uncertainty and its impact on robustness have so far been neglected in Machine Learning (ML). To fill this gap, we examine how IV affects ML performance and generalization, and how its impact can be mitigated. Specifically, we provide a methodological contribution that formalizes the problem of IV in the statistical learning framework. To prove the relevance of our contribution, we focus on one of the most critical domains, healthcare, and take individual (analytical and biological) variation as a specific kind of IV; in this domain, we use one of the largest real-world laboratory medicine datasets for the task of COVID-19 detection, to show that: (1) common state-of-the-art ML models are severely impacted by the presence of IV in data; and (2) advanced learning strategies, based on data augmentation and soft computing methods (data imprecisiation), together with proper study designs, can be effective at improving robustness to IV. Our findings demonstrate the critical relevance of correctly accounting for IV to enable safe deployment of ML in real-world settings.
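One of the mitigation strategies the abstract names is data augmentation that injects IV-like perturbations at training time. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes within-subject variation can be approximated by zero-mean Gaussian noise whose standard deviation is a fixed coefficient of variation (`cv`) of each measured feature, which is a common simplification for analytical/biological variation in laboratory measurements. The function name and parameters are hypothetical.

```python
import numpy as np

def augment_with_iv(X, y, cv=0.05, n_copies=3, rng=None):
    """Augment (X, y) by appending noisy replicas of each sample.

    Noise is zero-mean Gaussian with standard deviation cv * |x| per
    feature, emulating within-subject (instantial) variation whose
    magnitude scales with the measured value.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    replicas = [X]
    for _ in range(n_copies):
        # Per-feature noise scale proportional to the measurement magnitude.
        noise = rng.normal(0.0, cv * np.abs(X))
        replicas.append(X + noise)
    X_aug = np.vstack(replicas)
    y_aug = np.concatenate([y] * (n_copies + 1))  # labels are unchanged
    return X_aug, y_aug
```

A model trained on `(X_aug, y_aug)` sees each instance under several plausible IV realizations, which is one way to encourage predictions that are stable under within-subject variation.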
Keywords
reliability, instantial variation