Human-Centered Explainable AI (HCXAI): Reloading Explainability in the Era of Large Language Models (LLMs)

CHI EA '24: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (2024)

Abstract
Human-centered XAI (HCXAI) advocates that algorithmic transparency alone is not sufficient for making AI explainable. Explainability of AI is more than just “opening” the black box — who opens it matters just as much as, if not more than, how it is opened. In the era of Large Language Models (LLMs), is “opening the black box” still a realistic goal for XAI? In this fourth CHI workshop on Human-centered XAI (HCXAI), we build on the maturation achieved through the previous three installments to craft the coming-of-age story of HCXAI in the era of LLMs. We aim toward actionable interventions that recognize both the affordances and the pitfalls of XAI. The goal of the fourth installment is to question how XAI assumptions fare in the era of LLMs and to examine how human-centered perspectives can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we emphasize “operationalizing”: we seek actionable analysis frameworks, concrete design guidelines, transferable evaluation methods, and principles for accountability.