Self-supervised robot self-modeling using a single egocentric camera

Research Square (2023)

Abstract
The ability of robots to model their own dynamics is key to autonomous planning and learning, as well as to autonomous damage detection and recovery. Traditionally, dynamic models are pre-programmed or learned from external observations and IMU data. Here, we demonstrate for the first time how a task-agnostic dynamic self-model can be learned using only a single first-person-view camera in a self-supervised manner, without any prior knowledge of robot morphology, kinematics, or task. We trained an egocentric visual self-model using random motor babbling on a 12-DoF robot. We then show how the robot can leverage its visual self-model to achieve various locomotion tasks, such as moving forward, moving backward, and turning, all without any additional physical training. The accuracy of the egocentric model exceeds that of a model trained using an IMU. We also show how the robot can automatically detect and recover from damage. We suggest that self-supervised egocentric visual self-modeling could allow complex systems to continuously model themselves without additional sensors or prior knowledge.
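To make the idea of a self-supervised visual self-model concrete, the sketch below shows one plausible formulation: a forward model that predicts the robot's next egocentric visual embedding from its current embedding and a 12-DoF motor command, trained only on (observation, action, next observation) triples collected by random motor babbling. This is not the authors' implementation; the class name ForwardSelfModel, the embedding size, the network architecture, and the synthetic data are illustrative assumptions.

```python
# Minimal conceptual sketch (assumed, not the paper's code): a forward
# self-model trained on motor-babbling data.  The supervisory signal is
# simply the next observation, so no labels, morphology model, or prior
# kinematics are needed.
import torch
import torch.nn as nn

EMBED_DIM = 64    # assumed size of an egocentric image embedding
ACTION_DIM = 12   # 12-DoF motor command, matching the robot in the paper

class ForwardSelfModel(nn.Module):
    """Predict the next visual embedding from the current embedding and action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMBED_DIM + ACTION_DIM, 256),
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, EMBED_DIM),
        )

    def forward(self, embed, action):
        return self.net(torch.cat([embed, action], dim=-1))

# Synthetic stand-in for motor-babbling data: random embeddings, random joint
# commands in [-1, 1], and next embeddings produced by an arbitrary linear map.
N = 1024
embeds = torch.randn(N, EMBED_DIM)
actions = torch.rand(N, ACTION_DIM) * 2 - 1
next_embeds = embeds + 0.1 * actions @ torch.randn(ACTION_DIM, EMBED_DIM)

model = ForwardSelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Self-supervised training loop: minimize prediction error on the next frame's embedding.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(embeds, actions), next_embeds)
    loss.backward()
    optimizer.step()
```

Once such a model is trained, it could be used for planning without further physical trials, for example by sampling candidate action sequences, rolling them out through the model, and executing the sequence whose predicted embedding trajectory best matches a locomotion goal; large, persistent prediction errors could likewise serve as a cue for damage detection.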
Key words
self-supervised self-modeling