Basic Information
Biography
The purpose of our lab's research program is to advance visual navigation of mobile robots. Our work finds application in transportation, planetary exploration, mining, warehouses, offices, and military scenarios.
Over the last 15 years, our lab has spent a lot of time building and testing different navigation approaches. Much of our work has focused on a navigation stack we pioneered called visual teach and repeat (VT&R). VT&R is particularly interesting in that it allows a robot to repeat a long (several-kilometre) route that was taught manually, using only a single vision sensor (stereo camera, lidar, Kinect) for feedback, with no GPS needed. VT&R has been successful because it avoids constructing a visual map of the world in a single privileged coordinate frame and instead uses a topometric map. We have also spent a lot of time improving the robustness of visual localization in the presence of lighting and seasonal change.
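To make the teach-and-repeat idea concrete, here is a minimal sketch of a loop over a topometric map, where each keyframe stores only a transform relative to its predecessor rather than a pose in one global frame. All names here (`Keyframe`, `TopometricMap`, the `matcher` and `controller` callables) are hypothetical illustrations, not the actual VT&R interface.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """A vertex in the topometric map: visual features plus the relative
    transform to the previous keyframe (no single privileged frame)."""
    features: list
    rel_transform: tuple  # (dx, dy, dtheta) w.r.t. the previous keyframe

@dataclass
class TopometricMap:
    keyframes: list = field(default_factory=list)

    def teach(self, features, rel_transform):
        # Teach pass: append a keyframe as the operator drives the route.
        self.keyframes.append(Keyframe(features, rel_transform))

    def repeat(self, matcher, controller):
        # Repeat pass: localize against each keyframe in turn and steer
        # back onto the taught path using only the relative pose error.
        for kf in self.keyframes:
            pose_error = matcher(kf.features)  # vision-based localization
            controller(pose_error)             # path-tracking correction
```

Because localization is always relative to the nearest taught keyframe, drift never accumulates in a global frame, which is one reason the topometric structure scales to multi-kilometre routes.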
Today we are quite interested in the idea of generalizability. New rich sensors come out all the time, and building something like VT&R takes a lot of software engineering and testing. Even porting navigation software from one robot to another similar robot inevitably involves tuning many parameters to maximize performance. The vision we are working towards is a generalized navigation framework that would work with any robot base and any rich sensor. The structure, or template, of the navigation framework can ideally stay the same, but the details need to be filled in for each new robot/sensor combination (e.g., how do we model sensors and extract features? how do we model motions? where are the sensors located on the robot? what are the sensor calibration parameters? what are the controller gains?). This is where data and machine learning can help us. We would like to simply gather input/output data for a new robot, identify or learn all the necessary details for a given task, and then auto-generate the navigation stack from a template. We think this is possible, and it will require carefully blending ideas from classical robotics with machine learning. Please have a look at our recent papers for progress towards this challenging goal.
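The template idea above might be sketched as follows: a fixed navigation "template" whose robot- and sensor-specific details arrive as a configuration object that could, in principle, be identified from input/output data. `RobotConfig`, `make_nav_stack`, and the proportional steering correction are hypothetical placeholders, not the lab's actual software.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RobotConfig:
    """The per-robot details that fill in the shared template."""
    extract_features: Callable  # sensor model / feature extractor
    motion_model: Callable      # predicts the next pose from a command
    sensor_offset: tuple        # where the sensor sits on the robot
    controller_gain: float      # e.g. a proportional steering gain

def make_nav_stack(cfg: RobotConfig):
    """Auto-generate one navigation step from the shared template."""
    def step(raw_measurement, command, pose):
        features = cfg.extract_features(raw_measurement)
        predicted = cfg.motion_model(pose, command)
        # Hypothetical correction: steer proportionally to lateral error.
        correction = -cfg.controller_gain * predicted[1]
        return features, predicted, correction
    return step
```

The point of the design is that `step` never changes between robots; only the entries of `RobotConfig` do, and those are exactly the quantities one would hope to identify or learn from data.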
Research Interests
Papers: 285
IEEE Robotics and Automation Letters, no. 2 (2024): 1572-1579
IEEE Robotics and Automation Letters, no. 99 (2023): 1-8
Data Disclaimer
Page data comes from publicly available internet sources, partner publishers, and automated AI analysis. We make no promise or guarantee as to the validity, accuracy, correctness, reliability, completeness, or timeliness of the page data. For questions, contact us by email: report@aminer.cn