2019
DOI: 10.1007/s40430-019-1607-0

A new model to distinguish welds performed by short-circuit GMAW based on FRESH algorithm and MLP ANN

Abstract: Short-circuit gas metal arc welding has been studied continuously over the years because of its important role in manufacturing processes. Many studies of the process aim to understand the influence of the shielding gas on weld quality. In this context, this work treats the welding voltage and current signals as time series and applies a feature extraction based on scalable hypothesis tests, called the FRESH algorithm, to obtain the signal features. After t…
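The FRESH step mentioned in the abstract (FeatuRe Extraction based on Scalable Hypothesis tests) is the procedure implemented by the open-source tsfresh Python library. A minimal sketch of that extraction-and-selection step, under assumed file names, column layout and label column (not the paper's actual data), could look like:

```python
# Sketch of FRESH-style feature extraction with tsfresh (assumed data layout).
import pandas as pd
from tsfresh import extract_relevant_features

# Long-format table: one row per sample of the voltage/current signals,
# 'weld_id' identifies each weld, 'time' orders the samples.
signals = pd.read_csv("welding_signals.csv")   # columns: weld_id, time, voltage, current
labels = pd.read_csv("weld_labels.csv", index_col="weld_id")["shielding_gas"]

# FRESH: extract a large candidate feature set per weld, then keep only the
# features whose relevance to the target passes the scalable hypothesis tests.
features = extract_relevant_features(
    signals, labels, column_id="weld_id", column_sort="time"
)
print(features.shape)  # (n_welds, n_selected_features) -> input to the MLP ANN
```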


Cited by 6 publications (2 citation statements) · References 23 publications
“…The methods applied for the HO decision (classification problem) are Autonomous Learning Multimodel System (ALMMo) [15] (which has no tuning parameters, since it extracts all the features and adjustments from data); Self-Organizing Fuzzy Logic Classifier (SOFL) [16] (using Mahalanobis distance and Granularity Level of 2.9); Type-2 Fuzzy Logic Classifier (T2FLS) [17], [18] (being the learning parameter = 0.01, tolerance = 10⁻⁸, β₁ = 0.9 and β₂ = 0.999); Support Vector Machine Classifier (SVM) [7], [19] (with linear kernel and penalty parameter of 10 and 100 for Scenarios 1 and 2, respectively); and Multilayer Perceptron Classifier (MLP) [20], [21] (with 6 neurons in hidden layers and solver lbfgs).…”
Section: The Proposed Evaluation (mentioning)
confidence: 99%
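For reference, the scikit-learn classifiers named in this statement can be set up with the reported hyperparameters roughly as sketched below. ALMMo, SOFL and T2FLS have no standard library implementation and are omitted; reading "6 neurons in hidden layers" as a single 6-neuron hidden layer is an assumption.

```python
# Sketch of the SVM and MLP classifiers with the hyperparameters quoted above.
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

svm_scenario_1 = SVC(kernel="linear", C=10)    # penalty parameter 10, Scenario 1
svm_scenario_2 = SVC(kernel="linear", C=100)   # penalty parameter 100, Scenario 2
mlp = MLPClassifier(hidden_layer_sizes=(6,), solver="lbfgs")  # assumed single hidden layer

# Usage (with FRESH features and labels as in the sketch above):
# svm_scenario_1.fit(X_train, y_train); mlp.fit(X_train, y_train)
```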
“…Regarding the regression problem (estimation of the completed download percentage and the download duration), six methods are compared: Multilayer Perceptron Regressor (MLP) [20], [21] (with 22 and 4 neurons in hidden layers, tanh and logistic activation functions and lbfgs solver for Scenarios 1 and 2, respectively); KNN [3], [20] with 4 and 6 neighbors considered for each Scenario; Random Forest (RF) [7], [22] with 94 and 106 trees in the forest, in each case; Gradient Boosting Machine (GBM) [23], with 84 and 120 trees in their ensemble; Extreme Gradient Boosting (XGBoost) [9], with 174 and 120 estimators each; and Light Gradient Boosting Machine (LightGBM) [8], which used 148 and 139 estimators in the ensemble for each Scenario.…”
Section: The Proposed Evaluation (mentioning)
confidence: 99%
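A comparable sketch for the six regressors named in this statement, using the scikit-learn, xgboost and lightgbm APIs with the per-Scenario hyperparameters quoted above; the single-hidden-layer MLP configuration and the dictionary layout are illustrative assumptions.

```python
# Sketch of the six regressors, configured per Scenario as reported.
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

scenario_1 = {
    "MLP": MLPRegressor(hidden_layer_sizes=(22,), activation="tanh", solver="lbfgs"),
    "KNN": KNeighborsRegressor(n_neighbors=4),
    "RF": RandomForestRegressor(n_estimators=94),
    "GBM": GradientBoostingRegressor(n_estimators=84),
    "XGBoost": XGBRegressor(n_estimators=174),
    "LightGBM": LGBMRegressor(n_estimators=148),
}
scenario_2 = {
    "MLP": MLPRegressor(hidden_layer_sizes=(4,), activation="logistic", solver="lbfgs"),
    "KNN": KNeighborsRegressor(n_neighbors=6),
    "RF": RandomForestRegressor(n_estimators=106),
    "GBM": GradientBoostingRegressor(n_estimators=120),
    "XGBoost": XGBRegressor(n_estimators=120),
    "LightGBM": LGBMRegressor(n_estimators=139),
}

# Usage: for name, model in scenario_1.items(): model.fit(X_train, y_train)
```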