A multi-modal garden dataset and hybrid 3D dense reconstruction framework based on panoramic stereo images for a trimming robot

Can Pu, Chuanyu Yang, Jinnian Pu, Radim Tylecek, Robert B. Fisher

CoRR (2023)

Abstract
Recovering an outdoor environment's surface mesh is vital for an agricultural robot during task planning and remote visualization. Image-based dense 3D reconstruction is sensitive to large movements between adjacent frames and to the quality of the estimated depth maps. Our proposed solution to these problems is based on a newly designed panoramic stereo camera together with a novel hybrid software framework consisting of three fusion modules: disparity fusion, pose fusion, and volumetric fusion. The pentagon-shaped panoramic stereo camera comprises five stereo camera pairs that stream synchronized panoramic stereo images to the three fusion modules.

In the disparity fusion module, initial disparity maps are computed from the rectified stereo images using multiple stereo vision algorithms. These initial disparity maps, together with the intensity images, are fed into a disparity fusion network that produces refined disparity maps. The refined disparity maps are then converted into full-view (360-degree) point clouds or single-view (72-degree) point clouds for the pose fusion module.

The pose fusion module adopts a two-stage, global-coarse-to-local-fine strategy. In the first stage, each pair of full-view point clouds is registered by a global point cloud matching algorithm to estimate the transformation for an edge of a global pose graph, which effectively implements loop closure. In the second stage, a local point cloud matching algorithm matches the single-view point clouds of different nodes. The poses of all corresponding edges in the global pose graph are then locally refined using three proposed rules, yielding a refined pose graph. The refined pose graph is optimized to produce a global pose trajectory for volumetric fusion.

In the volumetric fusion module, the global poses of all the nodes are used to integrate the single-view point clouds into a volume, producing the mesh of the whole garden. The proposed framework and its three fusion modules are tested on a real outdoor garden dataset and demonstrate superior performance. The whole pipeline takes about 4 minutes on a desktop computer to process the real garden dataset, which is available at: https://github.com/Canpu999/Trimbot-Wageningen-SLAM-Dataset
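As a concrete illustration of the disparity fusion module's inputs, the sketch below computes two initial disparity maps from a rectified stereo pair using OpenCV's block matching and semi-global matching, then reprojects a (fused) disparity map to a point cloud. The paper's learned disparity fusion network is not reproduced here; the choice of OpenCV, of these two algorithms, and all parameter values are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def initial_disparities(left_gray, right_gray, num_disp=128, block=9):
    """Compute two candidate disparity maps from a rectified stereo pair.

    These stand in for the initial disparity maps that feed the paper's
    disparity fusion network; algorithms and parameters are illustrative.
    """
    # Classical block matching: fast but noisy.
    bm = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
    disp_bm = bm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Semi-global matching: slower but smoother.
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0, numDisparities=num_disp, blockSize=block,
        P1=8 * block * block, P2=32 * block * block,
        mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY)
    disp_sgbm = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

    return disp_bm, disp_sgbm

def disparity_to_points(disparity, Q):
    """Reproject a (fused) disparity map to a 3D point cloud, using the
    reprojection matrix Q produced by cv2.stereoRectify."""
    points = cv2.reprojectImageTo3D(disparity, Q)  # H x W x 3
    mask = disparity > 0                           # keep valid pixels only
    return points[mask]
```

OpenCV's `compute` returns fixed-point disparities scaled by 16, hence the division; the fused disparity would come from a network combining `disp_bm` and `disp_sgbm` with the intensity images.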
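The pose fusion module's global-coarse-to-local-fine strategy can be sketched with Open3D's standard multiway-registration pattern: FPFH-feature RANSAC for the coarse global alignment, point-to-plane ICP for local refinement, and Levenberg-Marquardt pose graph optimization. Open3D is an assumed stand-in library, the voxel size and thresholds are illustrative, and the paper's three edge-refinement rules are not reproduced.

```python
import numpy as np
import open3d as o3d

VOXEL = 0.05  # downsampling voxel size in metres (illustrative value)

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down,
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

def coarse_to_fine(source, target):
    """Global RANSAC alignment on features, then local ICP refinement."""
    reg = o3d.pipelines.registration
    src, src_fpfh = preprocess(source)
    dst, dst_fpfh = preprocess(target)
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, dst, src_fpfh, dst_fpfh, True, 3 * VOXEL,
        reg.TransformationEstimationPointToPoint(False), 3, [],
        reg.RANSACConvergenceCriteria(100000, 0.999))
    fine = reg.registration_icp(
        src, dst, VOXEL, coarse.transformation,
        reg.TransformationEstimationPointToPlane())
    info = reg.get_information_matrix_from_point_clouds(
        src, dst, VOXEL, fine.transformation)
    return fine.transformation, info

def build_and_optimize(clouds):
    """Chain odometry edges between consecutive frames, then optimize.

    Loop-closure edges between full-view clouds would be appended the same
    way with uncertain=True before optimization.
    """
    reg = o3d.pipelines.registration
    graph = reg.PoseGraph()
    odometry = np.eye(4)
    graph.nodes.append(reg.PoseGraphNode(odometry))
    for i in range(len(clouds) - 1):
        T, info = coarse_to_fine(clouds[i], clouds[i + 1])
        odometry = T @ odometry
        # Node poses are camera-to-world transforms.
        graph.nodes.append(reg.PoseGraphNode(np.linalg.inv(odometry)))
        graph.edges.append(
            reg.PoseGraphEdge(i, i + 1, T, info, uncertain=False))
    reg.global_optimization(
        graph,
        reg.GlobalOptimizationLevenbergMarquardt(),
        reg.GlobalOptimizationConvergenceCriteria(),
        reg.GlobalOptimizationOption(max_correspondence_distance=VOXEL,
                                     edge_prune_threshold=0.25,
                                     reference_node=0))
    return graph
```

After optimization, `graph.nodes[k].pose` gives the global pose trajectory consumed by volumetric fusion.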
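For the volumetric fusion module, a minimal sketch of TSDF integration with Open3D: each single-view depth frame is integrated into a scalable TSDF volume using the inverse of its optimized camera-to-world pose, and a triangle mesh is extracted. The library choice, voxel size, and truncation distance are assumptions for illustration; the abstract does not specify the authors' integration backend.

```python
import numpy as np
import open3d as o3d

def fuse_to_mesh(rgbd_frames, intrinsic, node_poses,
                 voxel=0.02, sdf_trunc=0.08):
    """Integrate posed RGB-D frames into a TSDF volume and extract a mesh.

    rgbd_frames : list of o3d.geometry.RGBDImage, one per single view
    intrinsic   : o3d.camera.PinholeCameraIntrinsic for that view
    node_poses  : optimized 4x4 camera-to-world poses from the pose graph
    voxel, sdf_trunc : illustrative values in metres, not the authors'.
    """
    volume = o3d.pipelines.integration.ScalableTSDFVolume(
        voxel_length=voxel, sdf_trunc=sdf_trunc,
        color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)
    for rgbd, pose in zip(rgbd_frames, node_poses):
        # integrate() expects a world-to-camera extrinsic, i.e. the
        # inverse of the camera-to-world node pose.
        volume.integrate(rgbd, intrinsic, np.linalg.inv(pose))
    mesh = volume.extract_triangle_mesh()
    mesh.compute_vertex_normals()
    return mesh
```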
Keywords
3D reconstruction, Stereo vision, Disparity fusion, Pose graph optimization, Point cloud registration, Volumetric fusion