Homography-Based Visual Servoing of Robot Pose Under an Uncalibrated Eye-to-Hand Camera

IEEE/ASME Transactions on Mechatronics (2023)

Abstract
Visual servoing can enhance the flexibility of robot control in unstructured environments. 3-D robot visual servoing is nontrivial because the image space is 2-D. Homography-based visual servoing (HBVS) with adaptation has been investigated to achieve 3-D pose control under parametric uncertainties, but it relies on a stringent condition termed persistent excitation (PE) for parameter convergence, which restricts the performance of pose control. In this article, we propose a dynamics-based control method called composite learning HBVS for robotic 3-D pose regulation under an eye-to-hand monocular camera with uncalibrated extrinsic parameters, where a composite learning law achieves camera parameter convergence without PE. Compared with existing adaptive HBVS methods, the proposed method accurately estimates the camera's extrinsic parameters under a weaker condition of interval excitation, which avoids tedious offline calibration. In addition, it requires neither measurement of reference-plane rotations nor detection of valid vanishing points, which simplifies practical application. Simulations and experiments on an articulated collaborative robot with 7 degrees of freedom verify that the proposed method performs well for both extrinsic parameter estimation and robot pose control.
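The abstract's key idea is that a composite learning law reuses recorded regressor data so that parameter convergence needs only interval excitation rather than persistent excitation. Below is a minimal, hedged sketch of this general technique, assuming a linearly parameterized prediction model y = W(t) theta; the class name, signal names (W_t, y_t), and gains are hypothetical and do not come from the paper, so this illustrates the concept rather than the authors' actual controller.

```python
import numpy as np

class CompositeLearner:
    """Illustrative composite-learning estimator for an unknown parameter
    vector theta (e.g., uncalibrated camera extrinsics), assuming a
    linearly parameterized model y = W(t) @ theta."""

    def __init__(self, dim, gamma=1.0, kappa=5.0, forgetting=0.99):
        self.theta_hat = np.zeros(dim)   # current parameter estimate
        self.gamma = gamma               # gain on the tracking-error term
        self.kappa = kappa               # gain on the data-driven prediction error
        self.rho = forgetting            # discount on the stored excitation
        self.M = np.zeros((dim, dim))    # accumulated regressor (excitation) matrix
        self.b = np.zeros(dim)           # accumulated regressor-times-output vector

    def update(self, W_t, y_t, tracking_term, dt):
        # Accumulate excitation over a finite interval (interval excitation):
        # once M becomes positive definite, the stored data alone pin down theta.
        self.M = self.rho * self.M + dt * (W_t.T @ W_t)
        self.b = self.rho * self.b + dt * (W_t.T @ y_t)

        # Prediction error built from stored data rather than the instantaneous signal.
        pred_error = self.b - self.M @ self.theta_hat

        # Composite law: instantaneous tracking-error term plus memory-based correction.
        self.theta_hat += dt * (self.gamma * tracking_term + self.kappa * pred_error)
        return self.theta_hat
```

In this sketch the memory term drives the estimate toward the value consistent with all data collected so far, which is why excitation over a single finite interval suffices; the tracking-error term plays the role of the usual adaptive-control update.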
Keywords
Cameras, Robots, Robot kinematics, Convergence, Adaptation models, Visualization, Visual servoing, Composite learning, dynamics-based control, parameter convergence, robot pose control, uncalibrated camera, vision-based control