Synergizing Natural Language Towards Enhanced Shared Autonomy

Shalutha Rajapakshe, Atharva Dastenavar, André Schakkal, Emmanuel Senft

IEEE/ACM International Conference on Human-Robot Interaction (2024)

Abstract
Shared autonomy can be beneficial in allowing users of assistive robots to refine the robot's behavior and ensure it is adapted to their needs. However, current methodologies mostly focus on using joysticks or physical pushes to modify robots' trajectories, which may not be feasible for people with reduced mobility. In this paper, we present our initial work toward voice-based shared autonomy, implementing a language model that can use sequences of verbal commands to infer the intended correction direction. Our fine-tuned model, capable of running locally on a CPU, shows improved efficiency on complex and realistic sentences compared to recent generative pre-trained transformer (GPT) models.
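The task described in the abstract, mapping sequences of verbal commands to a correction direction, can be illustrated with a minimal sketch. The paper fine-tunes a language model for this; the toy keyword baseline below only shows the input/output format, and all names, the direction vocabulary, and the robot-frame axis convention are assumptions, not details from the paper.

```python
# Illustrative sketch only: a toy keyword baseline for turning verbal
# commands into a correction direction. The paper itself uses a fine-tuned
# language model; this vocabulary and axis convention are hypothetical.
import numpy as np

# Map direction words to unit vectors in an assumed robot frame
# (x: left/right, y: forward/back, z: up/down).
DIRECTIONS = {
    "left":    np.array([-1.0,  0.0,  0.0]),
    "right":   np.array([ 1.0,  0.0,  0.0]),
    "forward": np.array([ 0.0,  1.0,  0.0]),
    "back":    np.array([ 0.0, -1.0,  0.0]),
    "up":      np.array([ 0.0,  0.0,  1.0]),
    "down":    np.array([ 0.0,  0.0, -1.0]),
}

def correction_direction(commands):
    """Accumulate a sequence of verbal commands into one unit correction vector."""
    total = np.zeros(3)
    for cmd in commands:
        for word, vec in DIRECTIONS.items():
            if word in cmd.lower():
                total += vec
    norm = np.linalg.norm(total)
    return total / norm if norm > 0 else total

print(correction_direction(["move a bit left", "and slightly up"]))
```

A learned model replaces the keyword lookup so that paraphrases and compound sentences ("not that far, come back toward me") are handled, which is where the abstract reports gains over GPT baselines.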