Guest Editorial: New Developments in Explainable and Interpretable Artificial Intelligence

K. P. Suba Subbalakshmi, Wojciech Samek, Xia Ben Hu

IEEE Transactions on Artificial Intelligence (2024)

Abstract
This special issue brings together seven articles that address different aspects of explainable and interpretable artificial intelligence (AI). Over the years, machine learning (ML) and AI models have achieved strong performance across a range of tasks, sparking interest in deploying these methods in critical applications such as health and finance. However, to be deployable in the field, ML and AI models must be trustworthy. Explainable and interpretable AI are two areas of research that have become increasingly important for ensuring the trustworthiness, and hence deployability, of advanced AI and ML methods. Interpretable AI refers to models that obey domain-specific constraints so that they are more understandable to humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to methods that are typically used to explain another, black-box, model.
Key words
Explainable Artificial Intelligence, Guest Editorial, Convolutional Neural Network, Feature Space, Deep Reinforcement Learning, Graph Neural Networks, Artificial Intelligence Models, Artificial Intelligence Methods