Traditionally, computer vision solutions for detecting elements of interest (e.g., defects) rely on strict, context-sensitive implementations that address narrowly scoped problems under well-defined conditions. In contrast, several machine learning approaches have demonstrated their capacity to generalize, not only improving classification continuously but also learning from new examples, thanks to a fundamental principle: the separation of data from the algorithmic setup. Advances in backpropagation and progress in graphics card technology have pushed machine learning towards the subfield known as deep learning, which has become very popular across many industrial areas owing to its greater robustness and flexibility in capturing knowledge typically handled by humans, along with its remarkable scalability. Fabric defect detection, an essential step in quality control, is one of the manual processes that has been progressively automated using these approaches. The goal is manifold: to reduce human error, fatigue, ergonomic issues, and associated costs, while improving the speed and precision of the tasks involved, with a direct impact on profit. Following this line of research, with a specific focus on the textile industry, this work provides a brief review of both defect types and Automated Optical Inspection (AOI) based mostly on machine learning techniques, which have proven effective in identifying anomalies in textile material analysis. The inclusion of Convolutional Neural Networks (CNNs) based on well-known architectures such as AlexNet or the Visual Geometry Group network (VGG16) in computerized defect analysis has enabled accuracies above 98%. A short discussion is also provided, along with an analysis of the current state of this field of intervention and some future challenges.
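As a hedged illustration of the CNN-based approach mentioned above, the sketch below fine-tunes an ImageNet-pretrained VGG16 for a binary defective/non-defective fabric-patch classification task. The transfer-learning setup, two-class head, and hyperparameters are assumptions made for illustration, not details taken from the reviewed works.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained VGG16 backbone (assumed transfer-learning setup,
# a common choice in fabric-defect classification studies).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Replace the final classifier layer for a two-class problem
# (defective vs. non-defective fabric patch) -- an illustrative assumption.
model.classifier[6] = nn.Linear(in_features=4096, out_features=2)

# Freeze the convolutional feature extractor; train only the classifier head.
for param in model.features.parameters():
    param.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of 224x224 RGB fabric patches
    with 0/1 defect labels (data loading omitted for brevity)."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone and training only the classifier head is one common way such architectures are adapted to small, domain-specific defect datasets; full fine-tuning is an equally plausible variant.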
Recently released research on deep learning applications for perception in autonomous driving focuses heavily on LiDAR point cloud data as input to the neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Accordingly, a large share of the vehicle platforms used to create the datasets released for developing these neural networks, as well as some commercial AD solutions on the market, invest heavily in extensive sensor arrays comprising many sensors across several modalities. However, the cost of such sensor suites creates a barrier to entry for low-cost solutions targeting critical perception tasks such as Object Detection and SLAM. This paper explores current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation, discussing the considerations imposed by the real-time processing requirement and presenting results that demonstrate the usability of the developed work in the context of the proposed low-cost platform.
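To make the graph-based SLAM formulation concrete, the following is a minimal 2D pose-graph sketch using the GTSAM Python bindings (an assumed library choice; the paper's actual implementation, noise models, and poses are not specified here). Odometry edges chain successive poses, and a loop-closure edge, such as one obtained from LiDAR scan matching, lets nonlinear optimization correct the accumulated drift.

```python
import numpy as np
import gtsam

# Pose-graph SLAM sketch (assumes a recent GTSAM Python release):
# nodes are robot poses, edges are relative-pose constraints.
graph = gtsam.NonlinearFactorGraph()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose, then chain odometry constraints between successive poses.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(3, 4, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))
# Loop-closure edge (e.g., from LiDAR scan matching) tying pose 4 back to pose 1.
graph.add(gtsam.BetweenFactorPose2(4, 1, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

# Deliberately perturbed initial guesses, refined by nonlinear least squares.
initial = gtsam.Values()
for i, (x, y, th) in enumerate([(0.1, 0.1, 0.0), (2.1, -0.1, 0.1),
                                (4.0, 0.2, 1.5), (4.1, 2.1, 3.0),
                                (2.2, 2.0, -1.4)]):
    initial.insert(i, gtsam.Pose2(x, y, th))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

On a resource-constrained device, the same structure typically runs incrementally (adding factors as new scans arrive) rather than as a single batch optimization, which is one of the real-time considerations the abstract alludes to.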