Guessing human intentions to avoid dangerous situations in caregiving robots
arXiv (2024)
Abstract
For robots to interact socially, they must accurately interpret human
intentions and anticipate their potential outcomes. This is particularly
important for social robots designed for human care, which may encounter
situations that are dangerous for people, such as unseen obstacles in their
path, and must avoid them. This paper explores the Artificial Theory of Mind
(ATM) approach to inferring and interpreting human intentions. We propose an
algorithm that detects risky situations for humans and selects, in real time,
a robot action that removes the danger. We use a simulation-based approach to
ATM and adopt the 'like-me' policy to assign intentions and actions to people.
With this strategy, the robot can detect and act with a high success rate
under time-constrained situations. The algorithm has been implemented as part
of an existing robotics cognitive architecture and tested in simulation
scenarios. Three experiments have been conducted to evaluate the
implementation's robustness, precision, and real-time response, covering a
simulated scenario, a human-in-the-loop hybrid configuration, and a
real-world scenario.
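
The abstract's pipeline (infer the human's intention via a 'like-me' simulation, check the simulated path for danger, then pick a corrective robot action) can be sketched in minimal form. All names here (`predict_human_path`, `detect_risk`, `select_action`) and the straight-line motion model are illustrative assumptions, not the paper's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def predict_human_path(pos, goal, steps=10):
    """'Like-me' policy (assumed form): project the human's motion with the
    robot's own action model, i.e. assume the person heads straight toward
    the inferred goal just as the robot itself would."""
    return [Point(pos.x + (goal.x - pos.x) * t / steps,
                  pos.y + (goal.y - pos.y) * t / steps)
            for t in range(1, steps + 1)]

def detect_risk(path, obstacles, radius=0.5):
    """Return the first obstacle the simulated path collides with, if any."""
    for p in path:
        for obs in obstacles:
            if (p.x - obs.x) ** 2 + (p.y - obs.y) ** 2 < radius ** 2:
                return obs
    return None

def select_action(human, goal, obstacles):
    """Pick a robot action that removes the detected danger; otherwise
    let the person continue undisturbed."""
    risk = detect_risk(predict_human_path(human, goal), obstacles)
    if risk is None:
        return "continue"  # no danger on the inferred path
    return f"remove_obstacle({risk.x:.1f},{risk.y:.1f})"
```

In this sketch, an obstacle lying on the person's inferred straight-line path triggers a removal action, while a clear path leaves the robot idle; the real system additionally runs this loop under real-time constraints.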