
A Multi-modal Framework for Robots to Learn Manipulation Tasks from Human Demonstrations

J. Intell. Robotic Syst. (2023)

Abstract
Enabling robots to learn manipulation tasks by observing human demonstrations remains a major challenge. Recent advances in video captioning provide an end-to-end method for translating demonstration videos into robotic commands. Compared with general video captioning, the Video-to-Command (V2C) task faces two key challenges: (1) how to extract key frames containing fine-grained manipulation actions from demonstration videos that contain a large amount of redundant information; and (2) how to improve the accuracy of the generated commands enough for the V2C method to be applied to real robot tasks. To address these problems, we propose a multi-modal framework for robots to learn manipulation tasks from human demonstrations. The framework consists of five components: a Text Encoder, a Video Encoder, an Action Classifier, a Keyframe Aligner, and a Command Decoder. Within this framework, our work covers two aspects: (1) we extract key-frame information from the video and analyze its effect on improving the translation accuracy of robot commands; and (2) using both the video and the caption text, we explore how multi-modal information fusion improves the accuracy of the commands generated by the model. Experiments show that our model significantly outperforms existing methods on the standard video captioning metrics BLEU_N, METEOR, ROUGE_L, and CIDEr. In particular, the video-only variant CGM-V improves BLEU_4 by 0.8%, while the multi-modal variant CGM-M improves BLEU_4 by 43.7%. Furthermore, when combined with an affordance detection network and a motion planner, our framework enables the robot to reproduce the tasks shown in the demonstration. Our source code and expanded annotations for the IIT-V2C dataset are available at https://github.com/yin0816/CGM-M .
Key words
Multi-modal, Video captioning, Video to Command, Learn from demonstration
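
The abstract names the five components (Text Encoder, Video Encoder, Action Classifier, Keyframe Aligner, Command Decoder) but not their internals. Below is a minimal sketch of how such a multi-modal V2C model could be wired together, assuming a PyTorch implementation with GRU encoders, attention-style keyframe weighting, and simple concatenation fusion; all layer choices, dimensions, and names other than the five component names are illustrative assumptions, not the authors' actual CGM-M architecture.

```python
# Hypothetical sketch of a five-component multi-modal V2C model (not the authors' code).
import torch
import torch.nn as nn


class CommandGenerationModel(nn.Module):
    """Illustrative multi-modal V2C model: video features + caption text -> command tokens."""

    def __init__(self, vocab_size=1000, feat_dim=512, hidden_dim=512, num_actions=10):
        super().__init__()
        # Video Encoder: summarizes per-frame features extracted by a visual backbone.
        self.video_encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Text Encoder: embeds and encodes the caption of the demonstration video.
        self.text_embed = nn.Embedding(vocab_size, hidden_dim)
        self.text_encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Keyframe Aligner: sketched here as attention scores over frames, used to
        # pool the frames most relevant to the manipulation action.
        self.keyframe_aligner = nn.Linear(hidden_dim, 1)
        # Action Classifier: predicts the manipulation action from the pooled video context.
        self.action_classifier = nn.Linear(hidden_dim, num_actions)
        # Command Decoder: fuses the two modalities and emits robot command tokens.
        self.command_decoder = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True)
        self.output_proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, caption_tokens, max_len=10):
        # frame_feats: (B, T, feat_dim); caption_tokens: (B, L)
        frame_states, _ = self.video_encoder(frame_feats)            # (B, T, H)
        weights = torch.softmax(self.keyframe_aligner(frame_states), dim=1)
        video_ctx = (weights * frame_states).sum(dim=1)              # keyframe-weighted (B, H)
        action_logits = self.action_classifier(video_ctx)            # (B, num_actions)

        _, text_state = self.text_encoder(self.text_embed(caption_tokens))
        text_ctx = text_state.squeeze(0)                             # (B, H)

        fused = torch.cat([video_ctx, text_ctx], dim=-1)             # simple concat fusion
        dec_in = fused.unsqueeze(1).repeat(1, max_len, 1)            # feed context at each step
        dec_out, _ = self.command_decoder(dec_in)
        command_logits = self.output_proj(dec_out)                   # (B, max_len, vocab_size)
        return command_logits, action_logits


if __name__ == "__main__":
    model = CommandGenerationModel()
    frames = torch.randn(2, 30, 512)           # 30 frames of pre-extracted visual features
    caption = torch.randint(0, 1000, (2, 12))  # tokenized demonstration caption
    cmd_logits, act_logits = model(frames, caption)
    print(cmd_logits.shape, act_logits.shape)  # torch.Size([2, 10, 1000]) torch.Size([2, 10])
```

The concatenation fusion and softmax frame weighting are only one plausible reading of "multi-modal fusion" and "Keyframe Aligner"; the released code at https://github.com/yin0816/CGM-M is the authoritative reference for the actual design.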