2013
DOI: 10.1016/j.ijepes.2013.04.020
Insulation failure detection in transformer winding using cross-correlation technique with ANN and k-NN regression method during impulse test


Cited by 24 publications (9 citation statements)
References 25 publications
“…As with other strategies, the stacking ensemble combines weak models to produce a model with greater processing capacity (the meta-model). For a classification task, besides the SVR, the weak learners can be, for instance, a support vector machine (SVM) [58]-[60], k-nearest neighbors (k-NN) [61]-[63], or decision trees [64]-[66]. The artificial neural network takes the outputs of the weak learners as inputs and returns the final predictions based on them [67].…”
Section: Ensemble Learning Model
confidence: 99%
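
A minimal runnable sketch of the stacking arrangement this statement describes, assuming scikit-learn; the dataset, estimator choices, and hyperparameters are illustrative placeholders, not taken from the cited works:

```python
# Stacking-ensemble sketch: SVM, k-NN, and decision-tree weak learners
# feed their predictions into a neural-network (ANN) meta-model.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for a real classification task.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

weak_learners = [
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]

# The ANN meta-model takes the weak learners' outputs as its inputs
# and returns the final predictions.
stack = StackingClassifier(
    estimators=weak_learners,
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,),
                                  max_iter=2000, random_state=0),
)

stack.fit(X_train, y_train)
print("test accuracy:", stack.score(X_test, y_test))
```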
“…The k-NN is a memory-based algorithm, so all computation is postponed until the classification phase, since the learning process consists of memorizing the objects [60]. One advantage of k-NN is that the model is simple; it therefore requires less computational effort than deep-learning-based methods because, during training, the algorithm only stores objects.…”
Section: Model Architecture
confidence: 99%
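
A minimal from-scratch sketch of this memory-based (lazy) behavior, assuming NumPy; the class and toy data are hypothetical illustrations, not code from the cited papers. Here fit() only stores the objects, and all distance computation is deferred to predict():

```python
import numpy as np

class MemoryBasedKNN:
    """Lazy k-NN sketch: training memorizes objects; all computation
    happens at classification time."""

    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        # "Training" is just storing the data -- no model is built.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y)
        return self

    def predict(self, X):
        preds = []
        for x in np.asarray(X, dtype=float):
            # All computational effort is postponed to this phase:
            # distances to every stored object, then a majority vote
            # among the k nearest neighbors.
            dists = np.linalg.norm(self.X - x, axis=1)
            nearest = self.y[np.argsort(dists)[:self.k]]
            labels, counts = np.unique(nearest, return_counts=True)
            preds.append(labels[np.argmax(counts)])
        return np.array(preds)

# Illustrative usage with toy data.
clf = MemoryBasedKNN(k=3).fit(
    [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]],
    [0, 0, 0, 1, 1, 1],
)
print(clf.predict([[0.2, 0.1], [5.5, 5.5]]))  # -> [0 1]
```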