As the tasks performed by robots grow in difficulty and diversity, robot "hand-eye" collaborative operation has attracted widespread attention. This technology is widely used in the aerospace, medical, automotive, and industrial fields. Recently, hand-eye calibration technology has been developing toward higher precision and greater intelligence; however, much work remains to be done on identifying robot and camera parameters. This article introduces in detail the methods and theories involved in hand-eye calibration. Based on the structure of the algorithms and the type of optimization method, this paper organizes hand-eye calibration into four steps: camera pose estimation, gripper pose estimation, mathematical modeling, and error metrics. Finally, well-known open problems in hand-eye calibration are stated, and some new research directions are pointed out. The results of this review can help robot technicians choose an appropriate parameter identification method and help researchers identify areas for further study.
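As background (not taken from the review itself), the mathematical-modeling step in eye-in-hand calibration is usually built around the classic homogeneous-transform equation, sketched here in standard notation:

```latex
% Classic eye-in-hand calibration model (standard textbook formulation, not quoted from the review):
%   A_i : relative gripper motion between two robot poses (from forward kinematics)
%   B_i : relative camera motion between the same two poses (from extrinsic calibration)
%   X   : unknown, constant gripper-to-camera transform to be identified
A_i X = X B_i, \qquad
A_i = \begin{pmatrix} R_{A_i} & t_{A_i} \\ \mathbf{0}^\top & 1 \end{pmatrix}, \quad
X = \begin{pmatrix} R_X & t_X \\ \mathbf{0}^\top & 1 \end{pmatrix}.
```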
Explosive ordnance disposal (EOD) robots work in special environments that require dual robotic arms to cooperate in removing a bomb. Machine vision is therefore essential for locating bombs, and the accuracy of hand-eye calibration is especially important. A basic problem in the collaborative operation of dual robotic arms is solving for the unknown homogeneous transformation matrices: the hand-eye transformation of robotic arm 1, the base-to-base transformation, and the camera-to-end-effector transformation of robotic arm 2. In this article, the hand-eye calibration problem of the dual robotic arm system is formulated as two matrix equations, and a new method for simultaneously solving the unknowns in these equations is proposed. The method consists of a closed-form solution based on the Kronecker product and an iterative method that transforms the nonlinear problem into a convex optimization problem. The closed-form solution is used to quickly obtain the initial value for the iterative method, improving its efficiency and accuracy. In addition, we propose a hand-eye calibration method based on the re-projection error of an RGB-D camera. To demonstrate the feasibility and superiority of the proposed iterative method, we conducted simulations and real experiments and compared the method with two other calibration methods. The comparison results verify the superiority of the proposed method in terms of accuracy.
INDEX TERMS: EOD robot, RGB-D camera, simultaneous hand-eye calibration of dual robots, nonlinear optimization, re-projection error.
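The abstract does not spell out the closed-form derivation, but a generic Kronecker-product solution for a dual-arm equation of the type AX = YB proceeds roughly as in the sketch below. This is a textbook-style illustration under that assumption, not the paper's exact algorithm; the function and variable names are ours.

```python
import numpy as np

def solve_ax_yb(As, Bs):
    """Closed-form AX = YB solve via the Kronecker product.

    As, Bs : lists of 4x4 homogeneous transforms (one pair per robot pose,
             at least two non-degenerate pairs are needed).
    Returns (X, Y) as 4x4 homogeneous transforms.
    Generic sketch, not the paper's exact method.
    """
    n = len(As)
    # Rotation part: RA @ RX = RY @ RB
    # column-major vec: (I kron RA) vec(RX) - (RB^T kron I) vec(RY) = 0
    K = np.zeros((9 * n, 18))
    for i, (A, B) in enumerate(zip(As, Bs)):
        RA, RB = A[:3, :3], B[:3, :3]
        K[9 * i:9 * i + 9, :9] = np.kron(np.eye(3), RA)
        K[9 * i:9 * i + 9, 9:] = -np.kron(RB.T, np.eye(3))
    # Stacked vec-unknown [vec(RX); vec(RY)] lies in the null space of K:
    # take the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(K)
    v = Vt[-1]
    RX = v[:9].reshape(3, 3, order='F')
    RY = v[9:].reshape(3, 3, order='F')

    def to_so3(R):
        """Project a 3x3 matrix onto the nearest rotation matrix."""
        U, _, Vt2 = np.linalg.svd(R)
        S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)])
        return U @ S @ Vt2

    s = np.sign(np.linalg.det(RX))       # fix the overall sign of the null-space vector
    RX, RY = to_so3(s * RX), to_so3(s * RY)

    # Translation part: RA tX + tA = RY tB + tY  ->  [RA  -I][tX; tY] = RY tB - tA
    C = np.zeros((3 * n, 6))
    d = np.zeros(3 * n)
    for i, (A, B) in enumerate(zip(As, Bs)):
        C[3 * i:3 * i + 3, :3] = A[:3, :3]
        C[3 * i:3 * i + 3, 3:] = -np.eye(3)
        d[3 * i:3 * i + 3] = RY @ B[:3, 3] - A[:3, 3]
    t = np.linalg.lstsq(C, d, rcond=None)[0]

    X = np.eye(4); X[:3, :3], X[:3, 3] = RX, t[:3]
    Y = np.eye(4); Y[:3, :3], Y[:3, 3] = RY, t[3:]
    return X, Y
```

In such a scheme, the closed-form (X, Y) would serve as the starting point for an iterative, re-projection-error-based refinement of the kind the abstract describes.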
In this paper, an optimized kinematic modeling method is developed to accurately describe the actual structure of a mobile manipulator robot whose arm is similar to the Universal Robots UR5, and an improved self-collision detection technique is introduced to improve the description accuracy of each component and reduce the time required to approximate the whole robot. As the primary foundation for trajectory tracking and automatic navigation, the kinematic modeling of mobile manipulators has been the subject of much interest and research for many years. However, the kinematic models established by various methods differ from the actual physical model because researchers have mainly focused on the relationship between the driving joints and the end-effector position while ignoring the physical structure. To improve the accuracy of the kinematic model, we present a kinematic modeling method that adds key points and coordinate systems to components whose physical structure the classical method fails to capture. Self-collision detection is also a key problem for the mobile manipulator to successfully complete its specified tasks. In traditional self-collision detection, the description of each approximation is determined by the spatial transformation of the corresponding component of the mobile manipulator robot. In contrast, each approximation in this paper is established directly from the physical structure used in the kinematic modeling method, which significantly reduces the analysis complexity and shortens the required time (see the sketch below). Numerical simulations show that the kinematic model augmented with key points closely matches the actual structure of mobile manipulator robots, and that the proposed self-collision detection technique effectively improves detection performance. Additionally, the experimental results show that the kinematic modeling method and self-collision detection technique outlined in this paper can improve the inverse kinematics solution.
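The abstract does not name the geometric primitives used for the approximations, but a common choice is to attach a capsule (sphere-swept segment) between the key points of each modeled component. A minimal self-collision check under that assumption might look like the following sketch; all names are illustrative.

```python
import numpy as np
from itertools import combinations

def segment_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (standard clamped solve)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    b, c = d1 @ d2, d1 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = (b * s + f) / e if e > 1e-12 else 0.0
    t = np.clip(t, 0.0, 1.0)
    s = np.clip((b * t - c) / a, 0.0, 1.0) if a > 1e-12 else 0.0
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def self_collision(capsules, skip_adjacent=True):
    """capsules: list of (p, q, radius), endpoints in the world frame from the
    kinematic model, one capsule per approximated component.
    Returns True if any non-adjacent pair of capsules overlaps."""
    for i, j in combinations(range(len(capsules)), 2):
        if skip_adjacent and j == i + 1:
            continue  # neighbouring links always meet at the shared joint
        p1, q1, r1 = capsules[i]
        p2, q2, r2 = capsules[j]
        if segment_distance(p1, q1, p2, q2) < r1 + r2:
            return True
    return False
```

Because the capsule endpoints are the same key points used in the kinematic model, no separate spatial-transformation analysis is needed to place the approximations.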
Image moments are global descriptors of an image and can be used to achieve control-decoupling properties in visual servoing; however, only a few methods completely decouple the control. This study introduces a novel closed-form camera pose estimation method based on the image moments of planar objects. Traditional position-based visual servoing estimates the pose of the camera relative to an object, whereas the proposed method directly estimates the pose of the initial camera relative to the desired camera. Because the pose estimation method relies on plane parameters, a plane parameter estimation method based on moments invariant to 2D rotation, 2D translation, and scale is also proposed. A completely decoupled position-based visual servoing control scheme was then constructed from the two estimation methods. The new scheme exhibited asymptotic stability when the object plane was in the camera field of view. Simulation results demonstrated the effectiveness of the two estimation methods and the advantages of the visual servoing control scheme compared with the classical method.
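The abstract does not list the specific moment features used; as an illustration only, the raw, central (translation-invariant), and scale-normalized moments of a planar object region can be computed as in the minimal sketch below (our own implementation, not the paper's).

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq of a binary or grayscale image array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float(np.sum((x ** p) * (y ** q) * img))

def image_moments(img, max_order=3):
    """Centroid, central moments mu_pq, and scale-normalized moments eta_pq.

    eta_pq = mu_pq / m00^((p+q)/2 + 1) is invariant to scale, and the central
    moments are invariant to 2D translation of the object in the image.
    """
    m00 = raw_moment(img, 0, 0)
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    mu, eta = {}, {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[(p, q)] = float(np.sum(((x - xc) ** p) * ((y - yc) ** q) * img))
            eta[(p, q)] = mu[(p, q)] / m00 ** ((p + q) / 2 + 1)
    return (xc, yc), mu, eta
```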