
Autonomous navigation method based on RGB-D camera for a crop phenotyping robot

Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song

Journal of Field Robotics (2024)

Abstract
Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed for its real-time and accurate segmentation capabilities, enabling crop areas to be distinguished in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of the angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real-time deviation correction while the phenotyping robot walks. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both RGB and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
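The abstract describes a geometry-plus-fuzzy-control loop: segmented feature points are deprojected to 3-D, a row line yields α and d, and a fuzzy controller corrects the heading. The sketch below is a minimal illustration of that loop, not the authors' implementation; the camera intrinsics (FX, FY, CX, CY), the normalization scales (alpha_max, d_max), the membership sets, and the rule table are all hypothetical stand-ins, since the paper's actual parameters are not given here.

```python
# Minimal sketch (assumed parameters throughout): back-project RGB-D feature
# points with a pinhole model, fit a crop-row line to get the angle deviation
# (alpha) and lateral deviation (d), and map both through a toy fuzzy controller.
import numpy as np

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # assumed pinhole intrinsics

def deproject(u, v, z):
    """Back-project pixel (u, v) with depth z (m) to camera-frame 3-D."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def row_deviations(points_3d):
    """Fit a line to ground-plane points (x lateral, z forward) and return the
    heading error alpha (rad) and lateral offset d (m) at the robot (z = 0)."""
    pts = np.asarray(points_3d)
    slope, intercept = np.polyfit(pts[:, 2], pts[:, 0], 1)  # x = slope*z + b
    return np.arctan(slope), intercept

def trimf(x, a, b, c):
    """Triangular membership function with vertices (a, b, c)."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

# Three linguistic sets per input (Negative / Zero / Positive) on [-1, 1];
# the end sets extend past the range so saturated inputs keep membership 1.
SETS = {'N': (-2.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 2.0)}
SIGN = {'N': -1, 'Z': 0, 'P': 1}

def fuzzy_steer(alpha, d, alpha_max=0.35, d_max=0.10):
    """Toy Mamdani-style controller: steer against the combined deviation.
    Returns a normalized steering command in [-1, 1]."""
    a_n = float(np.clip(alpha / alpha_max, -1, 1))
    d_n = float(np.clip(d / d_max, -1, 1))
    num = den = 0.0
    for la, ma in SETS.items():
        for ld, md in SETS.items():
            w = min(trimf(a_n, *ma), trimf(d_n, *md))  # rule firing strength
            out = -np.sign(SIGN[la] + SIGN[ld])        # oppose the deviation
            num, den = num + w * out, den + w
    return num / den if den else 0.0

# Usage: three feature points slightly right of the image centerline.
pts = [deproject(u, v, z) for u, v, z in [(340, 400, 1.0), (335, 300, 2.0), (330, 200, 3.0)]]
alpha, d = row_deviations(pts)
print(fuzzy_steer(alpha, d))  # negative -> correct to the left
```

The weighted-average defuzzification here is one common choice for smooth steering output; the paper does not specify its inference or defuzzification method, so this is only one plausible reading of "fuzzy controllers with α and d as inputs".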