NeRF2Real: Sim2real Transfer of Vision-guided Bipedal Motion Skills using Neural Radiance Fields

CoRR (2023)

Abstract
We present a system for applying sim2real approaches to "in the wild" scenes with realistic visuals, and to policies that rely on active perception using RGB cameras. Given a short video of a static scene collected with a generic phone, we learn the scene's contact geometry and a function for novel view synthesis using a Neural Radiance Field (NeRF). We augment the NeRF rendering of the static scene by overlaying renderings of other dynamic objects (e.g. the robot's own body, a ball). A simulation is then created that combines this rendering engine with a physics simulator, which computes contact dynamics from the static scene geometry (estimated from the NeRF volume density) and the dynamic objects' geometry and physical properties (assumed known). We demonstrate that this simulation can be used to learn vision-based whole-body navigation and ball-pushing policies for a 20 degree-of-freedom humanoid robot with an actuated head-mounted RGB camera, and we successfully transfer these policies to a real robot.
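The abstract describes estimating the static scene's contact geometry from the NeRF volume density so a physics simulator can compute contact dynamics. Below is a minimal sketch (not the authors' code) of one common way to do this: sample the trained NeRF's density on a regular grid and extract an isosurface mesh with marching cubes. The `query_density` callable, grid resolution, and density threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: deriving a collision mesh from a NeRF density field.
# Assumption: `query_density(points)` evaluates the trained NeRF's volume
# density sigma at (N, 3) world-space points; threshold and resolution are
# illustrative, not taken from the paper.
import numpy as np
from skimage import measure  # provides marching_cubes


def density_to_collision_mesh(query_density, bounds, resolution=128, sigma_threshold=10.0):
    """Sample NeRF density on a regular grid and extract an isosurface mesh."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    xs = np.linspace(x0, x1, resolution)
    ys = np.linspace(y0, y1, resolution)
    zs = np.linspace(z0, z1, resolution)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)  # (R, R, R, 3)

    # Evaluate volume density at every grid point.
    sigma = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)

    # Marching cubes on the thresholded density yields a triangle mesh whose
    # surface approximates the static scene's contact geometry.
    verts, faces, _, _ = measure.marching_cubes(sigma, level=sigma_threshold)

    # Map vertices from grid-index coordinates back to world coordinates.
    scale = np.array([x1 - x0, y1 - y0, z1 - z0]) / (resolution - 1)
    verts_world = verts * scale + np.array([x0, y0, z0])
    return verts_world, faces
```

The resulting mesh could then be loaded as a static collision body in a physics simulator, while dynamic objects (robot body, ball) keep their known geometry and physical properties, as the abstract describes.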
Key words
active perception, actuated head-mounted RGB camera, contact dynamics, dynamic objects, generic phone, NeRF rendering, NeRF volume density, NeRF2Real, Neural Radiance Fields, physics simulator, realistic visuals, rendering engine, RGB cameras, short video, sim2real approaches, sim2real transfer, static scene geometry, view synthesis, vision-based whole-body navigation, vision-guided bipedal motion skills, "in the wild" scenes