Predicting Opioid Use Outcomes in Minoritized Communities

Abhay Goyal, Nimay Parekh, Lam Yin Cheung, Koustuv Saha, Frederick L. Altice, Robin O'Hanlon, Roger Ho Chun Man, Chunki Fong, Christian Poellabauer, Honoria Guarino, Pedro Mateu Gelabert, Navin Kumar

14th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (BCB 2023)

Abstract
Within the healthcare space, machine learning algorithms can exacerbate racial, ethnic, and gender disparities, among others. Many machine learning algorithms are trained on data from majority populations, thereby generating less accurate or reliable results for minoritized groups [3]. For example, at a given risk score, a widely used algorithm falsely concludes that Black individuals are healthier than equally sick White individuals [6]. Such large-scale algorithms can thus perpetuate biases. There has been limited work exploring potential biases in algorithms deployed within minoritized communities. In particular, minimal research has detailed how biases may manifest in algorithms developed by insurance companies to predict opioid use outcomes, or opioid overdoses among people who use opioids in urban areas. An algorithm trained on data from White individuals may provide incorrect estimates for Hispanic/Latino individuals, potentially resulting in adverse health outcomes.

Because predicting opioid use outcomes is important to improving health in populations often neglected by larger health systems [4], our goal is to examine how machine learning algorithms perform at determining opioid use outcomes within minoritized communities. As a case study, we used data from a sample of 539 young adults who engaged in nonmedical use of prescription opioids and/or heroin [5]. The prevalence and incidence of opioid use have increased rapidly in the US over the past two decades, with concomitant increases in opioid dependence, accidental overdose, and death. We addressed these issues through the following contributions: 1) using machine learning techniques, we predicted opioid use outcomes for participants in our dataset; 2) we assessed whether an algorithm trained on a majority sub-sample (e.g., Non-Hispanic/Latino, male) could accurately predict opioid use outcomes for a minoritized sub-sample (e.g., Latino, female). Our analysis was designed to replicate possible real-world scenarios and to provide insight into how to improve broad health outcomes via predictive modeling. For example, if an insurance company primarily caters to Non-Hispanic/Latino individuals, models trained on data from Non-Hispanic/Latino individuals may not accurately predict life insurance costs for Hispanic individuals seeking treatment, and our analysis can provide insight into such scenarios.

Results indicated that models were able to predict recent injection drug use and participation in drug treatment. The presence of peers who also engaged in opioid use appeared to play a role in predicting drug treatment and injection drug use. However, the available data lacked comprehensive information on other facets of opioid use, such as harm reduction. We noted a decrease in precision when we trained our models only on data from a majority sub-sample and tested these models on a minoritized sub-sample. Overall, machine learning approaches are only as precise and useful as the data they are trained on; to make valid and accurate predictions, they must be trained on data from people who are similar, in terms of key sociodemographic characteristics, to the populations about whom predictions will be made.
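The cross-group evaluation in contribution 2 can be pictured with a short sketch. The snippet below is a minimal illustration, not the study's actual pipeline: the classifier choice, the feature names (peer_opioid_use, prior_treatment), the hispanic_latino indicator column, and the survey.csv file are all hypothetical stand-ins for the study's survey variables.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

# Hypothetical survey data: one row per participant. All column names are
# illustrative stand-ins, not the study's actual variables.
df = pd.read_csv("survey.csv")

features = ["age", "peer_opioid_use", "prior_treatment"]  # assumed predictors
target = "injection_drug_use"                             # assumed binary outcome

# Split by the demographic attribute the study varies (assumed indicator column).
majority = df[df["hispanic_latino"] == 0]
minoritized = df[df["hispanic_latino"] == 1]

# Train only on the majority sub-sample, holding out part of it for comparison.
X_tr, X_te, y_tr, y_te = train_test_split(
    majority[features], majority[target], test_size=0.3, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Precision on the majority hold-out vs. the minoritized sub-sample; a drop on
# the latter mirrors the decrease in precision reported above.
p_major = precision_score(y_te, model.predict(X_te))
p_minor = precision_score(minoritized[target], model.predict(minoritized[features]))
print(f"precision, majority hold-out:  {p_major:.2f}")
print(f"precision, minoritized group:  {p_minor:.2f}")
```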
Key to mitigating biases in models that predict health outcomes within minoritized communities is the inclusion of stakeholders at every stage of the machine learning operations (MLOps) pipeline. For example, methadone patients need to be involved in the development of models to predict methadone dropout risk [1, 2]. Similarly, a committee of ethnic minority individuals can be involved in auditing algorithms used to detect cardiovascular risk. Insurance companies and other stakeholders who use machine learning to predict opioid use outcomes need to be aware that models can exacerbate biases, and should seek to improve their predictive modeling capabilities. Insurance companies whose datasets consist primarily of White individuals should seek to augment those datasets with individuals from minoritized backgrounds. Such practices can help providers make accurate predictions if their client demographics shift, or if nonwhite individuals seek treatment. There are an increasing number of independent corporations that audit large-scale machine learning models, and such corporations need to ensure that minoritized communities are adequately represented on the audit committee.
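As one concrete form such an audit might take, the sketch below reports precision separately for each demographic group, so that under-performance on a minoritized group is visible rather than averaged away. The group, label, and pred column names and the toy data are illustrative assumptions, not a procedure from the paper.

```python
import pandas as pd
from sklearn.metrics import precision_score

def precision_by_group(preds: pd.DataFrame, group_col: str) -> pd.Series:
    """Precision of 'pred' against 'label' within each demographic group."""
    return preds.groupby(group_col).apply(
        lambda g: precision_score(g["label"], g["pred"], zero_division=0)
    )

# Toy predictions for two groups; a real audit would use held-out model output.
audit = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 1, 0],
    "pred":  [1, 1, 1, 0, 1, 1],
})
print(precision_by_group(audit, "group"))
```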
Keywords
Opioid Use, Bias, Marginalization