Generative Expressive Robot Behaviors using Large Language Models
CoRR (2024)
Abstract
People employ expressive behaviors to effectively communicate and coordinate
their actions with others, such as nodding to acknowledge a person glancing at
them or saying "excuse me" to pass people in a busy corridor. We would like
robots to also demonstrate expressive behaviors in human-robot interaction.
Prior work proposes rule-based methods that struggle to scale to new
communication modalities or social situations, while data-driven methods
require specialized datasets for each social situation the robot is used in. We
propose to leverage the rich social context available from large language
models (LLMs) and their ability to generate motion based on instructions or
user preferences, to generate expressive robot motion that is adaptable and
composable, with behaviors that build on one another. Our approach utilizes few-shot
chain-of-thought prompting to translate human language instructions into
parametrized control code using the robot's available and learned skills.
Through user studies and simulation experiments, we demonstrate that our
approach produces behaviors that users found to be competent and easy to
understand. Supplementary material can be found at
https://generative-expressive-motion.github.io/.
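
The core mechanism described in the abstract is few-shot chain-of-thought prompting that maps a human language instruction to parametrized control code over the robot's skills. The following is a minimal illustrative sketch of that idea in Python; the skill names (nod, look_at, say), the prompt wording, and the client.complete call are assumptions made for illustration, not the paper's actual prompt or API.

```python
# A minimal sketch of few-shot chain-of-thought prompting for expressive
# behavior generation, assuming a generic chat-completion client and a
# hypothetical robot skill API; none of these names come from the paper.

FEW_SHOT_PROMPT = """You control a robot with these skills:
  robot.nod(times: int)       # nod the head
  robot.look_at(target: str)  # orient the head toward a target
  robot.say(text: str)        # speak a phrase

Translate each instruction into Python control code.
Reason step by step, then output only code after 'CODE:'.

Instruction: Acknowledge a person glancing at you.
Reasoning: A glance calls for a brief, non-verbal acknowledgment,
so the robot should face the person and nod once.
CODE:
robot.look_at("person")
robot.nod(times=1)

Instruction: {instruction}
Reasoning:"""


def generate_behavior(client, instruction: str) -> str:
    """Ask the LLM for chain-of-thought reasoning plus control code,
    then keep only the code that follows the 'CODE:' marker.

    `client.complete` is a placeholder for whatever text-completion
    call the deployment actually uses."""
    response = client.complete(FEW_SHOT_PROMPT.format(instruction=instruction))
    return response.split("CODE:", 1)[-1].strip()


# Usage: the returned code string would be validated and then executed
# against the robot's skill API to produce the expressive behavior, e.g.
#   code = generate_behavior(llm, "Politely pass people in a busy corridor.")
```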