Training a Vision-Based Autonomous Robot: From Material Bending Analysis to Deformation Variables Predictions With an XR Approach

Proceedings of the 27th Conference on Computer Aided Architectural Design Research in Asia (CAADRIA), Volume 2, 2022

Abstract
This paper proposes a "Human Aided Hand-Eye System (HAHES)" to assist an autonomous robot with "Digital Twin Model (DTM)" sampling and correction. HAHES combines the eye-to-hand and eye-in-hand relationships to build an online DTM dataset. Users can download the data and inspect the DTM through a "Human Wearable XR Device (HWD)", then continuously update the DTM by back-testing the probing depth and the overlap between the physical and virtual models. This paper focuses on a flexible linear material as the experimental subject and compares several data augmentation approaches: 2D OpenCV homogeneous transformation, depth probing at the autonomous robot arm's nodes, and overlap judgement by the HWD. We then train an additive regression model on the back-tested DTM datasets and use a gradient boosting algorithm to infer an approximate 3D coordinate dataset from the 2D OpenCV data, shortening the elapsed time. Finally, this paper proposes a flexible mechanism for training a vision-based autonomous robot by combining different hand-eye relationships, HWD posture, and the DTM in a recursive workflow for further research.
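The pipeline described above can be illustrated with a minimal sketch, not the authors' actual implementation: 2D sample points are augmented with an OpenCV homogeneous (perspective) transformation, and a gradient boosting additive regression model maps 2D image coordinates to an approximate depth so that full 3D coordinates can be inferred without re-probing every point. All array shapes, parameter values, and the synthetic depth data below are assumptions for illustration only.

```python
# Sketch of the abstract's pipeline (assumed details, not the paper's code):
# homography-based 2D augmentation + gradient boosting depth regression.
import numpy as np
import cv2
from sklearn.ensemble import GradientBoostingRegressor

# --- Data augmentation: warp 2D sample points with a homogeneous transform ---
src = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])     # image corners (assumed resolution)
dst = np.float32([[10, 5], [630, 12], [620, 470], [15, 460]])  # perturbed corners
H = cv2.getPerspectiveTransform(src, dst)                      # 3x3 homogeneous transform

points_2d = np.random.rand(200, 2) * [640, 480]                # synthetic 2D samples
augmented_2d = cv2.perspectiveTransform(
    points_2d.reshape(-1, 1, 2).astype(np.float32), H
).reshape(-1, 2)

# --- Regression: infer depth from 2D coordinates ------------------------------
# Stand-in for the back-tested DTM depth values; in the paper these would come
# from robot-arm depth probes and HWD overlap judgement, not a synthetic function.
depth = (0.001 * augmented_2d[:, 0] + 0.002 * augmented_2d[:, 1]
         + 0.05 * np.sin(augmented_2d[:, 0] / 50.0))

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(augmented_2d, depth)

# Predict approximate depth for new 2D detections, yielding (x, y, z) estimates
# without waiting for a full probing pass.
query_2d = np.random.rand(5, 2) * [640, 480]
pred_depth = model.predict(query_2d)
approx_3d = np.column_stack([query_2d, pred_depth])
print(approx_3d)
```

In a recursive workflow of this kind, each new probing or HWD overlap check would extend the training set, and the regressor would be refit so that subsequent 3D estimates improve over time.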
Keywords
deformation variables predictions,material bending analysis,autonomous robot,vision-based