2017 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)
DOI: 10.1109/etfa.2017.8247642
Feature selection using non-binary decision trees applied to condition monitoring

Cited by 13 publications (8 citation statements). References 27 publications.
“…The features that are traditionally used for CM (RMS, crest factor, kurtosis, …, in the time domain; amplitude of power spectra, band power, envelope, …, in the frequency domain) [4,12,13,14,19,20,21,22,23,24,25], and that are considered in this work, are useful in most applications to maintain the relevant information about the process or tool conditions [4].…”
Section: Methods
Confidence: 99%
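The time-domain features named in the passage above (RMS, crest factor, kurtosis) are simple to compute from a raw signal. A minimal sketch in Python, assuming the signal is held in a NumPy array; the function name and the sine test signal are illustrative, not from the cited work:

```python
import numpy as np

def time_domain_features(signal):
    """Three time-domain condition-monitoring features
    commonly used in the CM literature (illustrative helper)."""
    rms = np.sqrt(np.mean(signal ** 2))
    crest_factor = np.max(np.abs(signal)) / rms
    # Kurtosis as the standardized fourth central moment (Pearson form).
    centered = signal - np.mean(signal)
    kurtosis = np.mean(centered ** 4) / (np.mean(centered ** 2) ** 2)
    return rms, crest_factor, kurtosis

# Sanity check on a pure sine over whole periods:
# RMS = 1/sqrt(2), crest factor = sqrt(2), kurtosis = 1.5.
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 50 * t)
rms, cf, kurt = time_domain_features(x)
```

Impulsive faults (e.g. bearing damage) raise the crest factor and kurtosis well above these sinusoidal baselines, which is why these features carry condition information.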
“…Trends of the features calculated on the basis of Tracking Deviation (TD), at different j_load [22]. Repeatability is shown as error bars.…”
Section: Figure
Confidence: 99%
“…A classification method belongs to the supervised learning category, and it is applicable in cases where the overall aim is to accurately assign a datapoint to a class [78][79][80]. There is a broad range of classification methods, as presented in Table 2 in the scope of SD, that clearly shows the impact and potential use of these techniques in conjunction with EO data.…”
Section: Classification
Confidence: 99%
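The supervised assignment of a datapoint to a class described above can be sketched with one of the simplest classifiers, a nearest-centroid rule; the toy data and labels below are illustrative, not from the cited survey:

```python
import numpy as np

# Training data: two well-separated classes in 2-D (illustrative values).
train_X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [2.8, 3.2]])
train_y = np.array([0, 0, 1, 1])

# Nearest-centroid classifier: represent each class by its mean,
# then assign new points to the class with the closest mean.
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(points):
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

labels = classify(np.array([[0.1, 0.0], [3.1, 2.9]]))
```

Each test point is assigned to the class whose centroid it lies nearest to, which is the essence of the supervised setting: labelled training data define the regions new datapoints are mapped into.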
“…Dimension reduction, like clustering, belongs to the unsupervised learning category and typically follows two main approaches: Feature Selection (FS), applicable when there is the necessity to select fewer characteristics [111,112]; and Feature Extraction, when the information needs to be synthesised through transformation. The aim is to create a small set of features covering much of the detail in the initial dataset [79,113,114]. These features/characteristics can then be fed into other algorithms or otherwise used as an end result [78].…”
Section: Dimension Reduction
Confidence: 99%
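The two approaches distinguished above can be contrasted in a few lines: feature selection keeps a subset of the original columns, while feature extraction builds new features by transformation. A minimal sketch, assuming a NumPy data matrix, using a variance-threshold filter for FS and an SVD-based principal-component projection for FE (the threshold and component count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: 100 samples, 5 features; feature 0 is nearly constant.
X = rng.normal(size=(100, 5))
X[:, 0] *= 0.001

# Feature Selection: keep original features whose variance
# exceeds a threshold (threshold chosen for illustration).
variances = X.var(axis=0)
selected = X[:, variances > 0.1]

# Feature Extraction: build new features by projecting the centered
# data onto its top-2 principal directions (right singular vectors).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
extracted = Xc @ Vt[:2].T
```

Selection drops the near-constant column but leaves the surviving features interpretable as-is; extraction mixes all columns into a smaller set of components, which is the "synthesised through transformation" case in the quoted passage.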