2010
DOI: 10.1007/978-3-642-16687-7_66
An Overproduce-and-Choose Strategy to Create Classifier Ensembles with Tuned SVM Parameters Applied to Real-World Fault Diagnosis

Abstract: We present a supervised learning classification method for model-free fault detection and diagnosis, aiming to improve the maintenance quality of motor pumps installed on oil rigs. We investigate our generic fault diagnosis method on 2000 examples of real-world vibrational signals obtained from operational faulty industrial machines. The diagnostic system detects each considered fault in an input pattern using an ensemble of classifiers, which is composed of accurate classifiers that differ on their prediction…
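The abstract describes an overproduce-and-choose ensemble strategy: many SVMs with different hyperparameters are first trained (overproduced), then a subset of accurate but mutually disagreeing classifiers is chosen and combined by voting. A minimal sketch of this idea, not the authors' implementation (the data, hyperparameter grid, and diversity threshold are illustrative assumptions):

```python
# Hedged sketch of an overproduce-and-choose SVM ensemble; all specific
# values (grid, threshold, synthetic data) are illustrative assumptions.
from itertools import product
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Overproduce: one SVM per (C, gamma) pair.
pool = []
for C, gamma in product([0.1, 1, 10], [0.01, 0.1, 1]):
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    pool.append((clf, clf.score(X_val, y_val)))

# Choose: scan from most to least accurate, keeping classifiers whose
# validation predictions differ from those already kept (a crude
# diversity proxy standing in for the paper's selection criterion).
pool.sort(key=lambda p: -p[1])
chosen, preds = [], []
for clf, acc in pool:
    p = clf.predict(X_val)
    if not preds or all(np.mean(p != q) > 0.02 for q in preds):
        chosen.append(clf)
        preds.append(p)

# Combine the chosen classifiers by majority vote.
votes = np.mean([c.predict(X_val) for c in chosen], axis=0)
ensemble_acc = np.mean((votes > 0.5) == y_val)
```

The selection loop is the "choose" step: accuracy alone would pick near-identical models, so each candidate must also disagree with the already-selected members before it is admitted.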

Cited by 4 publications (1 citation statement)
References 9 publications
“…The wrapper methods select the features based on the resulting classification performance; hence, the learning task is a part of the feature selection process. Additionally, wrapper methods have been used for multi-label data feature selection (Dendamrongvit, Vateekul & Kubat, 2011;Wandekokem, Varejão & Rauber, 2010). In filter methods, the best set of features is selected using the statistical characteristics of data (e.g., the correlation among features and classes).…”
Section: Introduction (mentioning confidence: 99%)
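The citation statement contrasts wrapper methods, where the classifier's own performance drives feature selection, with filter methods, which rank features by statistics computed from the data alone. A minimal wrapper-style sketch using scikit-learn's sequential selector (the dataset and feature counts are illustrative assumptions, not taken from the cited works):

```python
# Hedged sketch of wrapper-style feature selection: the SVM's
# cross-validated score decides which features survive, so the learning
# task is part of the selection loop, as the statement above describes.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=8, n_informative=3,
                           random_state=0)

# Greedy forward selection wrapped around an SVM classifier.
selector = SequentialFeatureSelector(SVC(), n_features_to_select=3, cv=3)
selector.fit(X, y)
mask = selector.get_support()  # boolean mask of the chosen features
```

A filter method would instead score each feature independently (e.g., by correlation with the class label) and never consult the classifier.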