2018 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2018.8489269
An Ensemble Generation Method Based on Instance Hardness

Abstract: In Machine Learning, ensemble methods have been receiving a great deal of attention. Techniques such as Bagging and Boosting have been successfully applied to a variety of problems. Nevertheless, such techniques are still susceptible to the effects of noise and outliers in the training data. We propose a new method for the generation of pools of classifiers based on Bagging, in which the probability of an instance being selected during the resampling process is inversely proportional to its instance hardness, …
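The resampling idea described in the abstract can be illustrated roughly as follows. This is a minimal sketch, not the authors' exact algorithm: the kDN hardness estimate, the 1/(1 + hardness) weighting, and all function and parameter names are assumptions made for the example.

```python
# Sketch: Bagging-style pool generation where an instance's probability of
# being drawn is inversely proportional to its (kDN-estimated) hardness.
# Assumptions: kDN as the hardness measure, 1/(1 + hardness) weighting,
# decision trees as base learners.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

def kdn_hardness(X, y, k=5):
    """Fraction of each instance's k nearest neighbours with a different label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    neigh = idx[:, 1:]                      # drop the instance itself (column 0)
    return (y[neigh] != y[:, None]).mean(axis=1)

def hardness_weighted_bagging(X, y, n_estimators=10, k=5, seed=None):
    rng = np.random.default_rng(seed)
    hardness = kdn_hardness(X, y, k)
    # Selection probability inversely proportional to hardness; the +1
    # smoothing keeps even very hard instances with a non-zero probability.
    weights = 1.0 / (1.0 + hardness)
    probs = weights / weights.sum()
    pool, n = [], len(y)
    for _ in range(n_estimators):
        sample = rng.choice(n, size=n, replace=True, p=probs)
        pool.append(DecisionTreeClassifier().fit(X[sample], y[sample]))
    return pool
```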

Cited by 14 publications (13 citation statements) | References 13 publications
“…For instance, in the paper by Cruz et al (2017), the authors used IH to identify the scenarios where an ensemble with dynamic selection techniques outperform the K-NN classifier. The IH has also been used in ensemble generation methods (Walmsley et al., 2018; Souza et al., 2018a; Kabir et al., 2018).…”
Section: Instance Hardness (IH) (mentioning)
confidence: 99%
“…Instead of a single DSEL set, filtered by ENN, we propose combining the output of several classifier ensembles, each using a bootstrapped Dynamic Selection set based on the original DSEL, using Instance Hardness as a means to determine the selection probability of each instance in the DSEL. As in (WALMSLEY et al, 2018), we chose this probabilistic filtering approach for its ability to filter out the noisy data, while still allowing for the retention of hard instances in the class borders, with non-zero probability. The results for our proposal are compared not only against the method of Cruz et al, but also against the traditional Overall Local Accuracy, Local Class Accuracy (WOODS; KEGELMEYER; BOWYER, 1997), K-Nearest Oracles-Eliminate and K-Nearest Oracles-Union (KO; SABOURIN; BRITTO, 2008) algorithms.…”
Section: List Of Tables (mentioning)
confidence: 99%
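The pipeline quoted above can be sketched as follows. This is an assumed illustration, not the cited work's implementation: it bootstraps the DSEL with hardness-based selection probabilities and, as a stand-in for the dynamic selection step, uses the KNORA-U (K-Nearest Oracles-Union) implementation from the DESlib package; the majority-vote combination and all names are illustrative.

```python
# Sketch: combine several dynamic-selection systems, each fitted on a DSEL
# bootstrapped with instance-hardness-based probabilities.
# Assumptions: `pool` is a list of already-fitted classifiers, `hardness`
# is a precomputed per-instance hardness array for the DSEL (e.g. kDN),
# and class labels are integer-coded.
import numpy as np
from deslib.des import KNORAU

def bootstrap_dsel(X_dsel, y_dsel, hardness, rng):
    """Resample the DSEL; noisy (very hard) points become unlikely but remain possible."""
    weights = 1.0 / (1.0 + hardness)
    probs = weights / weights.sum()
    idx = rng.choice(len(y_dsel), size=len(y_dsel), replace=True, p=probs)
    return X_dsel[idx], y_dsel[idx]

def ensemble_of_ds(pool, X_dsel, y_dsel, hardness, X_test, n_systems=10, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_systems):
        Xb, yb = bootstrap_dsel(X_dsel, y_dsel, hardness, rng)
        ds = KNORAU(pool_classifiers=pool).fit(Xb, yb)
        votes.append(ds.predict(X_test))
    votes = np.stack(votes)                      # shape: (n_systems, n_test)
    # Majority vote across the dynamic-selection systems.
    return np.array([np.bincount(col).argmax() for col in votes.T])
```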
“…Nevertheless, we must first define what we mean by noise. Here, we adopt the definition used in (WALMSLEY et al, 2018), in which noise is considered as any process that changes the true value of a variable from its original value.…”
Section: Noise and Classification Problems (mentioning)
confidence: 99%
“…These points may impair the performance of the GMM over time. To overcome them, we use k-Disagreeing Neighbors (kDN) [24], defined in Eq. 7, as a noise filter.…”
Section: Noise Filter (mentioning)
confidence: 99%
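The kDN measure referred to above is commonly defined as the fraction of an instance's k nearest neighbours that carry a different class label. The sketch below shows how it can be used as a hard-threshold noise filter; the threshold and k values are arbitrary assumptions, and this is not a reproduction of the cited paper's Eq. 7 code.

```python
# Sketch: k-Disagreeing Neighbors (kDN) as a noise filter.
# Assumptions: Euclidean neighbourhoods, k=5, and a disagreement
# threshold of 0.8 are illustrative choices.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def kdn(X, y, k=5):
    """Fraction of each instance's k nearest neighbours with a different label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)          # column 0 is the instance itself
    neigh = idx[:, 1:]
    return (y[neigh] != y[:, None]).mean(axis=1)

def kdn_noise_filter(X, y, k=5, threshold=0.8):
    """Drop instances whose neighbourhood disagreement exceeds the threshold."""
    scores = kdn(X, y, k)
    keep = scores <= threshold
    return X[keep], y[keep]
```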