2022
DOI: 10.3390/computers11040054
Isolation Forests and Deep Autoencoders for Industrial Screw Tightening Anomaly Detection

Abstract: Within the context of Industry 4.0, quality assessment procedures using data-driven techniques are becoming more critical due to the generation of massive amounts of production data. In this paper, we address the detection of abnormal screw tightening processes, which is a key industrial task. Since labeling is costly, requiring a manual effort, we focus on unsupervised detection approaches. In particular, we assume a computationally light low-dimensional problem formulation based on angle–torque pairs. Our wo…

Cited by 13 publications (7 citation statements) | References 26 publications
“…The discrimination performance is given by the AUC = ∫₀¹ ROC dK. It should be noted that the AUC is a popular measure with two main advantages [34,35]: its values are not affected by the imbalance rate of the target class; and its values are easy to interpret (50%: performance of a random classifier; 70%: good; 80%: very good; 90%: excellent; and 100%: ideal classifier). After executing the 5-fold cross-validation, the AUC results are aggregated by computing the median value (which is less sensitive to outliers than the average value).…”
Section: Discussion
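The AUC-by-integration and median aggregation described in the excerpt above can be sketched as follows. This is a minimal illustration, not the authors' code: the trapezoidal rule approximates the integral over the ROC curve, and the per-fold AUC values are hypothetical.

```python
import numpy as np

def auc_trapezoid(fpr, tpr):
    # Trapezoidal approximation of AUC = int_0^1 ROC dK:
    # sort the ROC points by false-positive rate, then integrate.
    fpr, tpr = np.asarray(fpr, float), np.asarray(tpr, float)
    order = np.argsort(fpr)
    fpr, tpr = fpr[order], tpr[order]
    return float(np.sum(np.diff(fpr) * (tpr[:-1] + tpr[1:]) / 2.0))

# Hypothetical per-fold AUC values from a 5-fold cross-validation,
# with one outlier fold: the median (0.90) is far less distorted by
# the 0.62 outlier than the mean (0.848) would be.
fold_aucs = [0.91, 0.88, 0.93, 0.90, 0.62]
agg = float(np.median(fold_aucs))  # -> 0.90
```

A perfect classifier's ROC (rising straight to the top-left corner) yields an AUC of 1.0 under this function, and the diagonal yields 0.5, matching the interpretation scale quoted above.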
“…Let (L_I, L_1, ..., L_H, L_O) denote the structure of a dense (fully connected) Deep FeedForward Network (DFFN), where L_I and L_O represent the input and output layer sizes and H is the number of hidden layers. The proposed AE is based on an architecture that previously obtained high-quality anomaly detection results in an industrial anomaly detection task [26,34,35]. It assumes L_I = L_O, a symmetrical encoder and decoder structure (e.g., L_1 = L_{O−1}), and the popular ReLU activation function is used by all hidden neural units, with the exception of the output layer, which assumes a linear activation function.…”
Section: Anomaly Detection Methods
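The symmetric-AE structure quoted above (equal input/output sizes, ReLU hidden layers, linear output) can be sketched as a plain NumPy forward pass. The layer sizes and random weights below are illustrative assumptions, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric autoencoder: L_I = L_O = 2 (e.g., angle-torque
# pairs), with a mirrored encoder/decoder around a small bottleneck.
sizes = [2, 8, 2, 8, 2]  # input, encoder, bottleneck, decoder, output
weights = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)  # ReLU on all hidden layers...
        # ...and a linear activation on the output layer (no clipping).
    return h

x = rng.normal(size=(4, 2))                  # a batch of 4 input pairs
recon = forward(x)                           # reconstruction, same shape
score = np.mean(np.abs(recon - x), axis=1)   # MAE reconstruction error
```

In this kind of setup the per-example reconstruction error (here MAE) typically serves as the anomaly score: inputs the AE reconstructs poorly are flagged as abnormal.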
“…Besides the ANN structure, deep learning architectures include a large number of additional hyperparameters. In order to reduce the search space, and using modeling knowledge from previous OCC works (e.g., [39,13]), we fixed some choices, such as the use of the MAE measure as the loss function for both the AE and VAE and the use of Batch Normalization layers for the AE. We also restricted the search space for some hyperparameters.…”
Section: AutoOC Grammar
“…Also known as unary classification, OCC can be viewed as a subclass of unsupervised learning in which the Machine Learning (ML) model learns using training examples from a single class only [8,9]. This type of learning is valuable in diverse real-world scenarios where labeled data is non-existent, infeasible, or difficult to obtain (e.g., requiring a costly and slow manual class assignment), such as fraud detection [10], cybersecurity [11], predictive maintenance [12], or industrial quality assessment [13].…”
Section: Introduction
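The one-class idea in the excerpt above — fit on examples of a single ("normal") class, then flag test points that deviate from it — can be shown with a deliberately simple sketch. The z-score distance model below is a toy stand-in, not the paper's Isolation Forest or autoencoder; the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-class training data: only "normal" examples are available.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def anomaly_score(x):
    # Distance to the normal class in standardized units
    # (illustrative scoring rule, not the authors' method).
    return np.linalg.norm((x - mu) / sigma, axis=1)

# Calibrate the decision threshold using normal data alone:
threshold = np.percentile(anomaly_score(normal), 99)

test = np.array([[0.1, -0.2],   # near the normal class
                 [8.0, 8.0]])   # far from anything seen in training
flags = anomaly_score(test) > threshold  # -> [False, True]
```

The key OCC property is visible here: no abnormal example is ever needed for training; the threshold itself is calibrated from the single available class.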
“…CANE was used in several scientific studies as a means of reducing the number of inputs (after the categorical-to-numeric transform) fed to predictive ML models. Diverse real-world applications were addressed, including mobile performance marketing [5,9], Industry 4.0 anomaly detection [4,10], and quality prediction [8,11]. As a consequence of adopting the CANE python package, the aforementioned studies observed a reduced computational effort when preprocessing categorical data.…”
Section: CANE Impact and Computational Performance
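One way such a transform reduces the input count is by merging rare categorical levels before any one-hot encoding. The sketch below illustrates that general idea in plain Python; it is not CANE's actual API, and the function name, threshold, and data are assumptions for illustration only.

```python
from collections import Counter

def prune_rare_levels(values, threshold=0.05, other="Others"):
    # Illustrative frequency-based pruning (not CANE's API): levels whose
    # relative frequency falls below `threshold` are merged into a single
    # "Others" level, shrinking the number of one-hot inputs fed to a
    # downstream predictive model.
    n = len(values)
    freq = Counter(values)
    keep = {v for v, c in freq.items() if c / n >= threshold}
    return [v if v in keep else other for v in values]

data = ["a"] * 50 + ["b"] * 45 + ["c"] * 3 + ["d"] * 2
pruned = prune_rare_levels(data, threshold=0.05)
# Rare levels "c" and "d" collapse into "Others": 4 levels become 3,
# so a subsequent one-hot encoding produces fewer input columns.
```

Fewer distinct levels after pruning directly translates into fewer encoded inputs, which is consistent with the reduced preprocessing effort reported above.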