Modelling Human Trust in Robots During Repeated Interactions

Proceedings of the 11th Conference on Human-Agent Interaction (HAI 2023)

Abstract
Modelling humans' trust in robots is critical in human-robot interaction (HRI) to avoid under- or over-reliance on robots. Calibrating trust in real time remains challenging, and consequently there is limited work on calibrating humans' trust in robots in HRI. In this paper we describe a mathematical model that emulates the three-layered framework of trust (initial, situational, learned) and can potentially estimate humans' trust in robots in real time. We evaluated the trust model in an experimental setup in which participants played a trust game on four occasions. We validated the model with a linear regression analysis showing that the trust perception score (TPS) and the interaction session predicted the trust modelled score (TMS) computed by the trust model. We also show that TPS and TMS did not change significantly from the second to the fourth session; however, both scores in the last session increased significantly over the first session. The described work is an initial effort to model three layers of humans' trust in robots in a repeated HRI setup, and it requires further testing and extension to improve its robustness across settings.
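To make the three-layered framework concrete, the following is a hypothetical sketch of how initial, situational, and learned trust components might be combined into a single trust modelled score (TMS). The update rule, the weights, and all function names here are illustrative assumptions for exposition; they are not the mathematical model proposed in the paper.

```python
# Hypothetical three-layer trust sketch (illustrative assumptions only;
# the weights and update rules are NOT taken from the paper).

def initial_trust(disposition: float) -> float:
    """Initial layer: dispositional trust before any interaction, in [0, 1]."""
    return disposition

def situational_trust(robot_performance: float, task_risk: float) -> float:
    """Situational layer: trust driven by current performance and task risk."""
    return max(0.0, min(1.0, robot_performance * (1.0 - task_risk)))

def learned_trust(prev_learned: float, outcome: float, rate: float = 0.3) -> float:
    """Learned layer: exponential update from repeated interaction outcomes."""
    return (1.0 - rate) * prev_learned + rate * outcome

def trust_modelled_score(init_t: float, sit_t: float, learn_t: float,
                         w=(0.2, 0.3, 0.5)) -> float:
    """Combine the three layers into one score via an assumed weighted sum."""
    return w[0] * init_t + w[1] * sit_t + w[2] * learn_t

# Example: four repeated sessions, as in the paper's trust-game setup.
init_t = initial_trust(0.5)
learn_t = 0.5
for outcome, performance, risk in [(1.0, 0.8, 0.2), (0.8, 0.7, 0.3),
                                   (0.9, 0.9, 0.1), (1.0, 0.8, 0.2)]:
    learn_t = learned_trust(learn_t, outcome)
    tms = trust_modelled_score(init_t, situational_trust(performance, risk), learn_t)
    print(round(tms, 3))
```

Under this assumed formulation, learned trust accumulates across sessions while situational trust tracks each session's performance, so the combined score tends to rise over repeated successful interactions, mirroring the significant first-to-last-session increase the abstract reports.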
Key words
Trust, Measurement, Repeated Interactions, Human-Robot Interaction