For obstacle detection in complex traffic environments, this paper proposes a road free-space extraction and obstacle detection method based on stereo vision that combines the advantages of the V-disparity image and the Stixel method. First, depth information and the V-disparity image are computed from the disparity image. Then, the free space on the road surface is extracted using the RANSAC algorithm and a dynamic programming (DP) algorithm. Next, new V-disparity and U-disparity images are computed from the disparity image after the road-surface pixels have been removed. Finally, the heights and widths of on-road obstacles are extracted from the new V-disparity and U-disparity images, respectively, and obstacles are detected from this height and width information. The method is evaluated on the object detection and road detection benchmarks of the KITTI dataset. On the accuracy metrics quality, detection rate, detection accuracy, and effectiveness, it reaches 0.820, 0.863, 0.941, and 0.900, respectively, with a runtime of only 5.145 ms. Compared with other obstacle detection methods, it offers better detection accuracy and real-time performance. The experimental results show that the method is robust and fast enough for obstacle detection in complex traffic environments.
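As a concrete illustration of the first step, a V-disparity image can be built by histogramming the disparity values of each image row; a planar road surface then appears as a slanted line that RANSAC can fit. The sketch below assumes integer-quantized disparities, and the function name and parameters are illustrative, not taken from the paper:

```python
import numpy as np

def v_disparity(disp, max_disp=64):
    """Histogram the disparity values of each image row.

    Returns an (H, max_disp) array; a flat road surface projects to a
    slanted line in this image, which RANSAC can then fit.
    """
    h, _ = disp.shape
    vdisp = np.zeros((h, max_disp), dtype=np.int32)
    for row in range(h):
        d = disp[row]
        d = d[(d >= 0) & (d < max_disp)].astype(int)  # drop invalid disparities
        vdisp[row] = np.bincount(d, minlength=max_disp)
    return vdisp

# Toy disparity image: every pixel in row r has disparity r, so the
# V-disparity image is a diagonal line of full-row counts.
disp = np.tile(np.arange(8, dtype=float).reshape(8, 1), (1, 10))
vd = v_disparity(disp, max_disp=8)
assert vd[3, 3] == 10 and vd[3, 2] == 0
```

The U-disparity image used later for obstacle widths is built the same way, but per column instead of per row.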
To improve industrial production efficiency, a hand–eye system based on 3D vision is proposed and applied to workpiece assembly tasks. First, a hand–eye calibration optimization algorithm based on data filtering is proposed; it meets the accuracy requirements of hand–eye calibration by filtering out improper samples. Furthermore, an improved U-net is adopted for image segmentation, and SAC-IA coarse registration followed by ICP fine registration is used for point cloud registration, yielding a more accurate 6D pose estimate of the object. With the data-filtering calibration method, the average hand–eye calibration error is reduced by 0.42 mm, to 0.08 mm. Compared with other models, the improved U-net achieves higher accuracy for depth-image segmentation, reaching an Acc coefficient of 0.961 and a Dice coefficient of 0.876. The proposed object recognition and pose estimation pipeline attains an average translation error of 1.19 mm, an average rotation error of 1.27°, and an average processing time of 7.5 s. The experimental results show that the proposed system can complete high-precision assembly tasks.
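The data-filtering step of the calibration can be pictured as robust outlier rejection over per-sample residual errors. The abstract does not specify the actual criterion, so the median-absolute-deviation rule below is only an assumed stand-in:

```python
import numpy as np

def filter_samples(errors, k=2.0):
    """Keep only calibration samples whose residual error lies within
    k scaled median-absolute-deviations (MAD) of the median error.

    This MAD rule is an assumed stand-in for the paper's filtering
    criterion, which the abstract does not specify.
    """
    errors = np.asarray(errors, dtype=float)
    med = np.median(errors)
    mad = np.median(np.abs(errors - med)) + 1e-12  # guard against zero MAD
    # 1.4826 rescales MAD to be comparable to a standard deviation
    return np.abs(errors - med) <= k * 1.4826 * mad

# One grossly inconsistent pose pair is dropped before solving the
# hand-eye equation on the remaining samples.
keep = filter_samples([0.10, 0.12, 0.11, 5.0])
assert keep.tolist() == [True, True, True, False]
```

Filtering like this only discards samples; the calibration itself is then re-solved on the surviving pose pairs.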
To address the inflexibility of offline hand–eye calibration in the “eye-in-hand” configuration, this paper proposes an online hand–eye calibration method based on the ChArUco board. First, a hand–eye calibration model based on the ChArUco board is established by analyzing the mathematical model of hand–eye calibration and the image features of the ChArUco board. Exploiting the ChArUco board's combination of a checkerboard and ArUco markers, an online hand–eye calibration algorithm is designed and used to dynamically adjust the hand–eye pose relationship. Finally, online hand–eye calibration experiments verify the accuracy and robustness of the proposed method. The experimental results show that the accuracy of the online method is between 0.4 mm and 0.6 mm, almost the same as that of offline hand–eye calibration. By leveraging the advantages of the ChArUco board, the method realizes online hand–eye calibration and improves the flexibility and robustness of calibration.
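The mathematical model underlying eye-in-hand calibration is the classic AX = XB constraint: for two robot motions, the relative gripper motion A and the corresponding relative board-to-camera motion B are linked by the unknown camera-to-gripper transform X. The numpy sketch below verifies this relation on synthetic poses (all values invented for illustration, not from the paper):

```python
import numpy as np

def transform(angle, axis, t):
    """4x4 homogeneous transform from an axis-angle rotation and a translation."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Synthetic setup: X = camera-to-gripper (the unknown being calibrated),
# W = board-to-base (fixed), A_i = gripper-to-base at robot pose i.
X = transform(0.3, [0, 0, 1], [0.05, 0.02, 0.10])
W = transform(0.1, [1, 1, 0], [0.40, 0.00, 0.50])
A1 = transform(0.5, [1, 0, 0], [0.10, 0.00, 0.20])
A2 = transform(0.2, [0, 1, 0], [0.00, 0.10, 0.30])

# The camera observes the board at each pose: B_i = (A_i X)^-1 W.
B1 = np.linalg.inv(A1 @ X) @ W
B2 = np.linalg.inv(A2 @ X) @ W

# Relative motions satisfy the hand-eye constraint A X = X B.
A_rel = np.linalg.inv(A2) @ A1
B_rel = B2 @ np.linalg.inv(B1)
assert np.allclose(A_rel @ X, X @ B_rel)
```

In the online setting, each new ChArUco detection contributes another (A, B) pair, so X can be re-estimated continuously as the robot moves.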