Captum: A unified and generic model interpretability library for PyTorch

Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson

arXiv (2020)

Abstract
In this paper we introduce a novel, unified, open-source model interpretability library for PyTorch [12]. The library contains generic implementations of a number of gradient- and perturbation-based attribution algorithms, also known as feature, neuron and layer importance algorithms, as well as a set of evaluation metrics for these algorithms. It can be used for both classification and non-classification models, including graph-structured models built on neural networks (NN). In this paper we give a high-level overview of the supported attribution algorithms and show how to perform memory-efficient and scalable computations. We emphasize that the three main characteristics of the library are multimodality, extensibility and ease of use. Multimodality means support for inputs of different modalities such as image, text, audio or video. Extensibility allows adding new algorithms and features. The library is also designed to be easy to understand and use. In addition, we introduce an interactive visualization tool called Captum Insights that is built on top of the Captum library and allows sample-based model debugging and visualization using feature importance metrics.
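As a minimal sketch of how such an attribution algorithm is typically invoked through Captum's captum.attr module, the snippet below applies Integrated Gradients to a small illustrative PyTorch model. The toy model, tensor shapes and target index are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients


# Hypothetical toy model used only to illustrate the attribution call.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin1 = nn.Linear(3, 3)
        self.relu = nn.ReLU()
        self.lin2 = nn.Linear(3, 2)

    def forward(self, x):
        return self.lin2(self.relu(self.lin1(x)))


model = ToyModel()
model.eval()

inputs = torch.rand(2, 3)        # batch of 2 samples with 3 features each
baselines = torch.zeros(2, 3)    # reference inputs for the integration path

# Compute per-feature attributions for output class 0.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs,
    baselines=baselines,
    target=0,
    return_convergence_delta=True,
)
print(attributions)
print(delta)
```

Setting return_convergence_delta=True additionally returns the approximation error of the integral, which can serve as a simple sanity check on attribution quality.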
Keywords
generic model interpretability library, PyTorch