The wall of safety for AI - approaches in the Confiance.ai program.

SafeAI@AAAI(2022)

Abstract
AI faces several "walls" towards which it is advancing at high pace. Beyond social and ethical considerations, these walls concern subjects that are strongly interdependent, yet each gathers its own concerns from the AI community, spanning use, design, and research: trust, safety, security, energy, human-machine cooperation, and "inhumanity". Safety questions are particularly important for all of them. The Confiance.ai industrial program aims at solving some of these issues by developing seven interrelated projects that address these aspects from different viewpoints and integrate them into an engineering environment for AI-based systems. We present the concrete approach taken by Confiance.ai and a validation strategy based on real-world industrial use cases provided by its members.

The walls of AI and their relation with safety

Artificial intelligence is advancing at a very fast pace, both in research and in applications, and is raising societal questions that are far from being answered. But as it moves forward rapidly, it runs into what we call the five walls of AI, walls it is likely to crash into if we do not take precautions. Any one of these five walls could halt its progress, which is why it is essential to identify them and to seek answers, in order to avoid a so-called third winter of AI. Such a winter would follow the first two, in the 1970s and 1990s, during which AI research and development came to a virtual standstill for lack of budget and community interest. The five walls are trust, energy, safety, human interaction, and inhumanity. Each has a number of ramifications, and they obviously interact.

Copyright © 2022 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

There are different opinions on this matter. The paper by Yoshua Bengio, Yann LeCun, and Geoffrey Hinton (Bengio et al. 2021), written after their collective Turing Award, provides insights into the future of AI through deep learning and neural networks without addressing the same topics. The 2021 progress report of Stanford's 100-year longitudinal study (Littman et al. 2021) examines AI advances to date and presents challenges for the future that are very complementary to those we discuss here. The recent book by César Hidalgo (2021) looks at how humans perceive AI (and machines). The book "Human Compatible" by Stuart Russell (2019) is interested in the compatibility between machines and humans, a subject we treat differently when we talk about the interaction wall.
Key words
safety, AI, program, walls, approaches