Fairness and bias correction in machine learning for depression prediction across four study populations.

Vien Ngoc Dang, Anna Cascarano, Rosa H Mulder, Charlotte Cecil, Maria A Zuluaga, Jerónimo Hernández-González, Karim Lekadir

Scientific Reports (2024)

Abstract
A significant level of stigma and inequality exists in mental healthcare, especially in under-served populations. These inequalities are reflected in the data collected for scientific purposes. When not properly accounted for, machine learning (ML) models learned from such data can reinforce structural inequalities or biases. Here, we present a systematic study of bias in ML models designed to predict depression in four case studies covering different countries and populations. We find that standard ML approaches regularly exhibit biased behavior. We also show that mitigation techniques, both standard ones and our own post-hoc method, can effectively reduce the level of unfair bias. No single best ML model for depression prediction provides equality of outcomes, which emphasizes the importance of analyzing fairness during model selection and of transparent reporting on the impact of debiasing interventions. Finally, we identify good practices that practitioners can adopt, and open challenges that remain, for enhancing fairness in their models.
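The abstract refers to auditing group-level fairness and applying post-hoc bias mitigation. As a minimal, hedged illustration only (not the authors' method, whose details are in the full paper), the sketch below computes two standard group-fairness gaps for a binary depression classifier on simulated data and then applies a generic post-hoc, group-specific threshold adjustment. All variable names and data here are hypothetical.

```python
import numpy as np

# Hypothetical sketch: fairness gaps for a binary depression classifier
# plus a simple post-hoc, per-group threshold adjustment. Simulated data;
# this is a generic technique, not the paper's specific method.
rng = np.random.default_rng(0)

n = 1000
group = rng.integers(0, 2, size=n)   # 0 = reference group, 1 = protected group
y_true = rng.integers(0, 2, size=n)  # ground-truth depression labels
# Inject a score penalty for the protected group so bias is visible.
scores = np.clip(0.5 * y_true + 0.3 * rng.random(n) - 0.15 * group, 0, 1)

def fairness_gaps(y_true, y_pred, group):
    """Absolute demographic-parity and equal-opportunity gaps between groups."""
    sel = [y_pred[group == g].mean() for g in (0, 1)]                    # selection rates
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]  # true-positive rates
    return {"demographic_parity": abs(sel[0] - sel[1]),
            "equal_opportunity": abs(tpr[0] - tpr[1])}

# Baseline: one global decision threshold for everyone.
y_pred = (scores >= 0.5).astype(int)
print("before:", fairness_gaps(y_true, y_pred, group))

# Post-hoc mitigation: search a threshold for the protected group that
# minimizes the equal-opportunity gap while the reference threshold stays fixed.
best_t, best_gap = 0.5, np.inf
for t in np.linspace(0.05, 0.95, 91):
    pred = np.where(group == 1, scores >= t, scores >= 0.5).astype(int)
    gap = fairness_gaps(y_true, pred, group)["equal_opportunity"]
    if gap < best_gap:
        best_t, best_gap = t, gap

y_pred_fair = np.where(group == 1, scores >= best_t, scores >= 0.5).astype(int)
print("after:", fairness_gaps(y_true, y_pred_fair, group), "| threshold:", round(best_t, 2))
```

In practice, such threshold searches would be run on held-out validation data, and the accuracy cost of the adjustment would be reported alongside the fairness gain, in line with the transparent-reporting point made in the abstract.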
Keywords
Machine learning for depression prediction, Algorithmic fairness, Bias mitigation, Novel post-hoc method, Psychiatric healthcare equity