Using a three-dimensional (3-D) sensor and point clouds offers several benefits over a traditional camera for industrial inspection. We focus on developing a classification solution for industrial inspection that takes point clouds as input. The approach employs deep learning to classify point clouds acquired with a 3-D sensor, the final goal being to verify the presence of certain industrial elements in the scene. We possess the computer-aided design (CAD) model of the whole mechanical assembly, and an in-house localization module provides an initial pose estimate from which 3-D point clouds of the elements are inferred. The accuracy of this approach is shown to be acceptable for industrial use. The robustness of the classification module with respect to the accuracy of the localization algorithm is also evaluated. Fig. 1 (a) Robot-based inspection system with three cameras mounted on an end effector and (b) Ensenso N35 3-D sensor.
Deep learning has driven major advances in computer vision. However, deep models require large amounts of manually annotated data, which are not easy to obtain, especially in sensitive industries. Rendering computer-aided design (CAD) models to generate synthetic training data can be an attractive workaround. This paper focuses on using deep convolutional neural networks (DCNNs) for automatic industrial inspection of mechanical assemblies, where training images are limited and hard to collect. The ultimate goal of this work is to train a DCNN classification model on synthetic renders and deploy it to verify the presence of target objects in never-seen-before real images collected by RGB cameras. Two approaches are adopted to close the domain gap between synthetic and real images. First, the Domain Randomization technique is applied to generate synthetic training data. Second, a novel approach is proposed to learn better feature representations by means of self-supervision: we used an Augmented Auto-Encoder (AAE) and achieved results competitive with our baseline model trained on real images. In addition, this approach outperformed the baseline when the problem was simplified to binary classification for each object individually.
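The abstract's two ingredients can be sketched together: Domain Randomization perturbs synthetic renders with nuisance factors, and an Augmented Auto-Encoder is trained to reconstruct the clean render from the perturbed one, so its bottleneck features become invariant to those nuisances. The sketch below shows only the randomization step in NumPy; the specific transforms (brightness, noise, an occluding patch) and their ranges are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(render):
    """Domain-randomization-style augmentation of one synthetic render
    (an H x W x 3 float image in [0, 1]). Transforms and ranges here
    are illustrative assumptions."""
    img = render.copy()
    img *= rng.uniform(0.6, 1.4)                     # random global brightness
    img += rng.normal(0.0, 0.05, img.shape)          # sensor-like noise
    h, w = img.shape[:2]
    y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
    img[y:y + h // 4, x:x + w // 4] = rng.random(3)  # random occluding patch
    return np.clip(img, 0.0, 1.0)

clean = rng.random((64, 64, 3))      # stand-in for a CAD render
augmented = randomize(clean)
# AAE training would then minimize || decoder(encoder(augmented)) - clean ||,
# i.e. reconstruct the clean render from its randomized version.
```

Training the auto-encoder on (`augmented`, `clean`) pairs, rather than the usual (`x`, `x`) pairs, is what distinguishes the AAE: the encoder is explicitly pushed to discard the randomized nuisances that separate synthetic from real imagery.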
This paper presents the application of several learning-based image classification techniques to conformity checking, a common problem in industrial visual inspection. The approaches are based on processing 2-D images. First, a classification pipeline was developed, with particular effort invested in choosing an appropriate classifier. The first experiment used Histogram of Oriented Gradients (HOG) features with a Support Vector Machine (SVM). To improve accuracy, we then employed a bag of visual words (BoVW) with an ORB detector to extract the features used to build the dictionary of visual words. The final solution uses features extracted by passing an image through the pre-trained deep convolutional neural network Inception; an SVM classifier trained on these features achieved high accuracy. To augment our image data set, transformations such as zooming and shearing were applied. The promising results show that state-of-the-art deep learning classification techniques can be successfully employed in visual industrial inspection.
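The first stage of the pipeline (gradient-histogram features plus a linear SVM) can be sketched in NumPy as follows. This is a deliberately crude stand-in: real HOG computes histograms over cells with block normalization, and a production SVM would come from a library; the single-histogram descriptor and the Pegasos-style hinge-loss trainer below are simplified illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def hog_like(img, n_bins=9):
    """Crude HOG-style descriptor: one histogram of unsigned gradient
    orientations weighted by gradient magnitude (real HOG adds cells
    and block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM trained by hinge-loss subgradient descent (Pegasos-style)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):               # labels y are in {-1, +1}
            if yi * (xi @ w) < 1.0:            # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
            else:                              # margin satisfied: only shrink w
                w -= lr * lam * w
    return w

# Toy data: vertical vs. horizontal stripe images stand in for two classes.
imgs = [np.tile(np.sin(np.arange(32) / 2), (32, 1)) for _ in range(10)]
imgs += [im.T for im in imgs]
y = np.array([-1] * 10 + [1] * 10)
X = np.array([hog_like(im + rng.normal(0.0, 0.05, im.shape)) for im in imgs])
w = train_linear_svm(X, y)
acc = np.mean(np.sign(X @ w) == y)
```

The two stripe orientations concentrate gradient energy in different histogram bins, so even this minimal feature is linearly separable; swapping in BoVW or Inception features, as the paper does, changes only the descriptor while the SVM stage stays the same.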
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.