Foreign object debris (FOD) significantly affects quality control during product assembly because it often causes product failure. Vision-based methods, being nondestructive and efficient, have become an important approach to FOD detection. However, they face two major challenges: (1) inexhaustible types (almost any object can become FOD) and (2) unpredictable locations (FOD can appear almost anywhere on the surface of a product). Therefore, this paper proposes an FOD visual detection method based on a doubt-confirmation strategy and aided by assembly models. Firstly, a coarse-to-fine method is designed for feature extraction and registration to align the test image with the reference image. Then, to address the unpredictable-location problem, different types of suspected FOD are extracted from the test image by combining supervised and unsupervised methods. Finally, to address the inexhaustible-type problem, an image comparison method based on a Histogram of Line Direction Angle is proposed, and re-recognition rules for suspected FOD are established to complete the final discrimination. Experiments are conducted on a product with a complex shape, and the results demonstrate the effectiveness and efficiency of our approach.
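The abstract does not define the Histogram of Line Direction Angle descriptor in detail; the sketch below is one plausible reading, assuming line segments are detected in aligned test and reference patches, their orientation angles are binned into a length-weighted histogram, and a large histogram distance flags the region as suspected FOD. The thresholds, bin count, and cosine distance are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of a line-direction-angle histogram comparison (assumed form).
import cv2
import numpy as np

def line_direction_histogram(gray_patch, n_bins=18):
    """Bin the orientation angles (0-180 deg) of detected line segments,
    weighted by segment length and normalized to sum to one."""
    edges = cv2.Canny(gray_patch, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=30,
                            minLineLength=15, maxLineGap=5)
    hist = np.zeros(n_bins, dtype=np.float32)
    if lines is None:
        return hist
    for x1, y1, x2, y2 in lines[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 180.0
        length = np.hypot(x2 - x1, y2 - y1)
        hist[int(angle / 180.0 * n_bins) % n_bins] += length
    total = hist.sum()
    return hist / total if total > 0 else hist

def is_suspected_fod(test_patch, ref_patch, threshold=0.4):
    """Flag the region if its line-direction histogram deviates strongly
    from the aligned reference patch (cosine distance as a stand-in metric)."""
    h_test = line_direction_histogram(test_patch)
    h_ref = line_direction_histogram(ref_patch)
    denom = np.linalg.norm(h_test) * np.linalg.norm(h_ref)
    similarity = float(h_test @ h_ref / denom) if denom > 0 else 0.0
    return (1.0 - similarity) > threshold
```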
Vision-based pose estimation is a fundamental task in many industrial fields, such as bin picking, autonomous assembly, and augmented reality. One of the most commonly used pose estimation approaches first detects 2D pose keypoints in the input image and then calculates the 6D pose with a pose solver. Recently, deep learning has been widely used for pose keypoint detection and achieves excellent accuracy and adaptability. However, its reliance on abundant, high-quality samples and supervision is a prominent drawback, particularly in the industrial field, leading to high data costs. Based on domain adaptation and computer-aided-design (CAD) models, this work proposes a virtual-to-real knowledge transfer method for pose keypoint detection that reduces the data cost of deep learning. To address the disorder of knowledge flow, a viewpoint-driven feature alignment strategy is proposed to simultaneously eliminate interdomain differences and preserve intradomain differences. The shape invariance of rigid objects is then introduced as a constraint to address the large assumption space in regressive domain adaptation. Multidimensional experimental results demonstrate the superiority of the method. Without real annotations, the normalized pixel error of keypoint detection is 0.033, and the proportion of pixel errors below 0.05 reaches 92.77%.
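As a reference point for the two-stage pipeline the abstract describes (2D keypoint detection followed by a pose solver), the sketch below recovers a 6D pose with OpenCV's PnP solver. The keypoint detector is left as a placeholder callable, and the CAD keypoint coordinates and camera intrinsics are assumed known; none of this reproduces the paper's domain-adaptation method itself.

```python
# Minimal sketch: learned 2D keypoints + PnP pose solver (generic pipeline, not the paper's network).
import numpy as np
import cv2

def estimate_6d_pose(image, detect_keypoints_2d, cad_keypoints_3d, K, dist=None):
    """Return the object's rotation vector and translation vector.

    detect_keypoints_2d: placeholder for a trained detector returning (N, 2) pixels.
    cad_keypoints_3d:    (N, 3) keypoint coordinates taken from the CAD model.
    K:                   3x3 camera intrinsic matrix.
    """
    pts_2d = np.asarray(detect_keypoints_2d(image), dtype=np.float64)
    pts_3d = np.asarray(cad_keypoints_3d, dtype=np.float64)
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed to converge")
    return rvec, tvec
```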
Recently, augmented reality technology has become a useful tool for assembly guidance, with projectors commonly serving as virtual-image output devices. In most situations, real-time, dynamic image projection is essential because the components to be assembled are randomly placed and movable. However, the cameras and the projector occupy different relative positions, which makes it difficult to project images in real time when augmented reality is used for assembly. A novel method based on a binocular-camera and projector system is proposed here to overcome this limitation. We establish the coordinate transformations among the camera, projector, and world coordinate systems based on a real-time intrinsic parameter matrix of the projector that we derive. Marker-free acquisition of the camera pose in the real world is also realized, which is the key technology for the camera-projector assembly visualization system. A cable-laying assembly experiment was conducted, and the results show that the proposed method achieves real-time projection for augmented-reality-assisted assembly.
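To make the coordinate-transformation chain concrete, the sketch below maps a 3D point expressed in world coordinates into projector pixel coordinates with a standard pinhole model, given projector extrinsics and an intrinsic matrix. All symbols and values are illustrative assumptions; the paper's real-time derivation of the projector intrinsics is not reproduced here.

```python
# Hedged sketch: world -> projector-pixel mapping via extrinsics (R_wp, t_wp) and intrinsics K_p.
import numpy as np

def world_to_projector_pixel(X_w, R_wp, t_wp, K_p):
    """Project a 3D world point onto the projector image plane (pinhole model)."""
    X_p = R_wp @ np.asarray(X_w, dtype=float) + t_wp  # world frame -> projector frame
    uvw = K_p @ X_p                                    # homogeneous pixel coordinates
    return uvw[:2] / uvw[2]                            # dehomogenize -> (u, v)

# Example with assumed values: a point 1.5 m in front of the projector, offset 0.1 m to the right.
K_p = np.array([[1400.0, 0.0, 960.0],
                [0.0, 1400.0, 540.0],
                [0.0, 0.0, 1.0]])
R_wp = np.eye(3)
t_wp = np.zeros(3)
print(world_to_projector_pixel([0.1, 0.0, 1.5], R_wp, t_wp, K_p))  # approx. (1053.3, 540.0)
```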