Interpretable deep learning architectures for improving drug response prediction performance: myth or reality?

Bioinformatics (2023)

Abstract
Motivation: Recent advances in deep learning model development have enabled more accurate prediction of drug response in cancer. However, the black-box nature of these models remains a hurdle to their adoption for precision cancer medicine. Recent efforts have focused on making these models interpretable by incorporating signaling pathway information into the model architecture. While such models improve interpretability, it is unclear whether this higher interpretability comes at the cost of less accurate predictions, or whether a prediction improvement can also be obtained.

Results: In this study, we comprehensively and systematically assessed four state-of-the-art interpretable models developed for drug response prediction, using three pathway collections, to answer this question. Our results showed that models that explicitly incorporate pathway information in the form of a latent layer perform worse than models that incorporate this information implicitly. Moreover, in most evaluation setups the best performance is achieved by a simple black-box model. In addition, replacing the signaling pathways with randomly generated pathways yields comparable performance for the majority of these interpretable models. Our results suggest that new interpretable models are necessary to improve drug response prediction performance. The current study also provides the baseline models and evaluation setups that such new models will need in order to demonstrate superior prediction performance.

Availability and Implementation: Implementations of all methods are provided at https://github.com/Emad-COMBINE-lab/InterpretableAI_for_DRP. Generated uniform datasets are in .

Contact: amin.emad{at}mcgill.ca

Supplementary Information: Online-only supplementary data is available at the journal's website.

Competing Interest Statement: The authors have declared no competing interest.