Digital-enabled manufacturing systems require a high level of automation for fast and low-cost production, but they should also offer flexibility and adaptiveness to varying and dynamic conditions in their environment, including the presence of human beings. However, the presence of workers in a workspace shared with robots decreases productivity, as the robot is not aware of the human's position and intention, which raises concerns about human safety. This issue is addressed in this work by designing a reliable safety monitoring system for collaborative robots (cobots). The main idea is to significantly enhance safety by combining recognition of human actions through visual perception with interpretation of physical human–robot contact through tactile perception. Two datasets containing contact and vision data were collected with the help of several volunteers. The action recognition system classifies human actions from their skeleton representation when they enter the shared workspace, and the contact detection system distinguishes between intentional and incidental interactions if physical contact between human and cobot takes place. Two different deep learning networks are used for human action recognition and contact detection, which, in combination, are expected to enhance human safety and increase the cobot's perception of human intentions. The results show a promising path for future AI-driven solutions for safe and productive human–robot collaboration (HRC) in industrial automation.
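A minimal sketch of how the two classifiers described in this abstract might be combined at decision time; the class labels, function name, and behavior mapping are illustrative assumptions, not taken from the paper. The vision branch labels the observed human action, the tactile branch labels any physical contact, and the cobot adapts accordingly:

```python
def cobot_response(action, contact):
    """Map (action, contact) classifier outputs to a cobot behavior.

    Labels are hypothetical examples of what the vision-based action
    recognizer and the tactile contact classifier could emit.
    """
    if contact == "incidental":
        return "stop"              # unplanned touch: halt immediately for safety
    if contact == "intentional":
        return "hand_guiding"      # deliberate touch: comply with the human
    if action in ("approaching", "reaching_into_workspace"):
        return "reduce_speed"      # human nearby: slow down preemptively
    return "normal_operation"      # no human interaction detected

print(cobot_response("approaching", None))      # -> "reduce_speed"
print(cobot_response("working", "incidental"))  # -> "stop"
```

The key design point suggested by the abstract is that tactile evidence (actual contact) overrides visual evidence (predicted intention), since contact is the more safety-critical signal.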
Low-volume industrial production is rarely highly automated because of the associated costs. Variable production requires flexible automation with close human–robot interaction. An exoskeleton can provide exactly these features to enhance industrial production. This article highlights the difficulties of using exoskeletons in an industrial setting. Moreover, it introduces the Robo-Mate project, an EU-funded project aimed at applying exoskeletons in industry.
Estimating the remaining useful life (RUL) of components is a crucial task for enhancing reliability, safety, and productivity and for reducing maintenance costs. In general, predicting the RUL of a component involves constructing a health indicator (HI) to infer the component's current condition and modelling the degradation process in order to estimate its future behavior. Although many signal processing and data-driven methods have been proposed to construct the HI, most existing methods rely on manual feature extraction techniques and prior expert knowledge, or on a large amount of failure data. Therefore, in this study, a new data-driven method based on a convolutional autoencoder (CAE) is presented to construct the HI. For this purpose, the continuous wavelet transform (CWT) was used to convert the raw vibration signals into two-dimensional images; then, the CAE model was trained on the healthy-operation dataset. Finally, the Mahalanobis distance (MD) between the healthy and failure stages was measured as the HI. The proposed method was tested on a benchmark bearing dataset and compared with several traditional HI construction models. Experimental results indicate that the constructed HI exhibited a monotonically increasing degradation trend and performed well in detecting incipient faults.
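The final step of the pipeline described above, using the Mahalanobis distance from a healthy baseline as the HI, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the feature vectors stand in for whatever the CAE extracts, and a diagonal covariance is assumed for clarity.

```python
import math

def mean_and_var(samples):
    """Per-dimension mean and variance of a list of healthy feature vectors."""
    n, d = len(samples), len(samples[0])
    mu = [sum(s[j] for s in samples) / n for j in range(d)]
    var = [sum((s[j] - mu[j]) ** 2 for s in samples) / n for j in range(d)]
    return mu, var

def mahalanobis_hi(x, mu, var):
    """HI = Mahalanobis distance of a feature vector x from the healthy
    baseline (diagonal-covariance simplification of the full MD)."""
    return math.sqrt(sum((x[j] - mu[j]) ** 2 / var[j] for j in range(len(x))))

# Toy feature vectors standing in for CAE-extracted features.
healthy = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]
mu, var = mean_and_var(healthy)
print(mahalanobis_hi([1.0, 2.0], mu, var))  # near zero: healthy condition
print(mahalanobis_hi([3.0, 5.0], mu, var))  # large: degraded condition
```

As the component degrades, its features drift away from the healthy baseline, so the MD-based HI grows monotonically, which matches the degradation trend reported in the abstract.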
Digital-enabled manufacturing systems require a high level of automation for fast and low-cost production but should also present flexibility and adaptiveness to varying and dynamic conditions in their environment, including the presence of human beings. This issue is addressed in this work by implementing a reliable system for real-time safe human–robot collaboration based on the combination of human action and contact type detection systems. Two datasets containing contact and vision data were collected with the help of several volunteers. The action recognition system classifies human actions from their skeleton representation when they enter the shared workspace, and the contact detection system distinguishes between intentional and incidental interactions if physical contact between human and robot takes place. Two different deep learning networks are used for human action recognition and contact detection, which, in combination, enhance human safety and increase the robot's awareness of human intentions. The results show a promising path for future AI-driven solutions for safe and productive human–robot collaboration (HRC) in industrial automation.
Performing predictive maintenance (PdM) is challenging for many reasons. Dealing with large datasets that may not contain run-to-failure (R2F) data complicates PdM even more. When no R2F data are available, identifying condition indicators (CIs), estimating the health index (HI), and then fitting a degradation model for predicting the remaining useful lifetime (RUL) are nearly impossible with supervised learning. In this paper, a 3-DoF delta robot used for a pick-and-place task is studied. In the proposed method, autoencoders (AEs) are used to predict when maintenance is required based on the signal sequence distribution and anomaly detection, which is vital when no R2F data are available. Due to the sequential nature of the data, the nonlinearity of the system, and the correlations between parameter time series, convolutional layers are used for feature extraction. Thereafter, a sigmoid function is used to predict the probability of an anomaly given the CIs acquired from the AEs. This function can be tuned manually according to the sensitivity of the system or optimized by solving a minimax problem. Moreover, the proposed architecture can be used for fault localization in the specified system. Additionally, the proposed method can calculate the RUL using a Gaussian process (GP) as a degradation model, given HI values as its input.
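The sigmoid mapping from condition indicators to an anomaly probability described above can be sketched as follows. All names, the MSE-based CI, and the slope/threshold values are illustrative assumptions; the paper's actual CIs come from convolutional autoencoders:

```python
import math

def reconstruction_error(x, x_hat):
    """CI: mean squared error between a signal window and its AE reconstruction.
    A healthy window is reconstructed well (small CI); an anomalous one is not."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def anomaly_probability(ci, k=10.0, threshold=0.5):
    """Sigmoid mapping of the CI to P(anomaly). The slope k and the
    threshold are the tunable sensitivity parameters (hypothetical values)."""
    return 1.0 / (1.0 + math.exp(-k * (ci - threshold)))

x = [0.0, 1.0, 0.0, -1.0]           # observed signal window
good = [0.05, 0.95, 0.02, -0.9]     # close reconstruction -> healthy
bad = [0.8, 0.1, 0.7, 0.2]          # poor reconstruction -> anomalous
print(anomaly_probability(reconstruction_error(x, good)))  # near 0 (healthy)
print(anomaly_probability(reconstruction_error(x, bad)))   # near 1 (anomaly)
```

Tuning `k` and `threshold` trades false alarms against missed detections, which is the manual-sensitivity versus minimax-optimization choice the abstract mentions.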