Sensitivity to Risk Profiles of Users When Developing AI Systems.

Canadian Conference on AI (2020)

Abstract
The AI community today has renewed concern about the social implications of the models it designs, imagining the impact of deployed systems. One thrust has been to reflect on issues of fairness and explainability before the design process begins. There is also increasing awareness of the need to engender trust from users, examining both the origins of mistrust and the value of multiagent trust modelling solutions. In this paper, we argue that social AI efforts to date often imagine a homogeneous user base, and that those models which do support differing solutions for users with different profiles have not yet examined one important consideration upon which trusted AI may depend: the risk profile of the user. We suggest how user risk attitudes can be integrated into approaches that reason about such dilemmas as sacrificing optimality for the sake of explainability. In the end, we reveal that it is challenging to satisfy the myriad needs of users in their desire to be more comfortable accepting AI solutions, and we conclude that tradeoffs need to be examined and balanced. We advocate reasoning about these tradeoffs with respect to user models and risk profiles as we design the decision-making algorithms of our systems.
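As an illustration of the kind of integration the abstract proposes, the sketch below scores candidate AI solutions by a risk-weighted combination of optimality and explainability. This is a minimal sketch under assumed definitions: the `risk_aversion` parameter, the `Candidate` fields, and the linear scoring rule are hypothetical illustrations, not the paper's actual model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A hypothetical AI solution with an optimality score and an
    explainability score, each normalized to [0, 1] (assumed scales)."""
    name: str
    optimality: float
    explainability: float

def score(c: Candidate, risk_aversion: float) -> float:
    """Risk-weighted utility (illustrative): a risk-averse user
    (risk_aversion near 1) values explainability over raw optimality;
    a risk-tolerant user (near 0) prefers the optimal but opaque solution."""
    return (1 - risk_aversion) * c.optimality + risk_aversion * c.explainability

def choose(candidates, risk_aversion: float) -> Candidate:
    """Pick the candidate with the highest risk-weighted score."""
    return max(candidates, key=lambda c: score(c, risk_aversion))

if __name__ == "__main__":
    options = [
        Candidate("opaque-optimal", optimality=0.95, explainability=0.30),
        Candidate("transparent-approx", optimality=0.75, explainability=0.90),
    ]
    # A cautious user is served the explainable solution; a risk-tolerant
    # user is served the optimal one, from the same candidate set.
    print(choose(options, risk_aversion=0.8).name)  # transparent-approx
    print(choose(options, risk_aversion=0.2).name)  # opaque-optimal
```

Under this toy rule, the same system yields different solutions for users with different risk profiles, which is the kind of tradeoff reasoning the paper advocates building into decision-making algorithms.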
Keywords
Position paper, Trusted AI, Risk profiles, Explainability