1992
DOI: 10.1016/0020-7373(92)90018-g
Tolerating noisy, irrelevant and novel attributes in instance-based learning algorithms

Cited by 324 publications (179 citation statements); references 10 publications.
“…LLP has been tested with both labeling heuristics (see Sect. 3.3), for cluster sizes k ∈ [2,12]. As parameters for the evolutionary strategy, we used maxgen = 10, psize = 25, mutvar = 1.0, crossprob = 0.3 and tournsize = 0.25.…”
Section: Prediction Performance Experiments (mentioning)
confidence: 99%
“…However, the cluster methods also assign labels to each observation in sample U , allowing for a subsequent training of other classifiers. Based on the clustering results, we have trained models for Naïve Bayes [14], kNN [2], Decision Trees [20], Random Forests [4], and the SVM [22] with linear and radial basis kernel. The model parameters have been optimized by a grid or evolutionary search.…”
Section: Prediction Performance Experiments (mentioning)
confidence: 99%
“…The nearest neighbour algorithm is one of the most popular of CBR algorithms [14]. The formulation of this algorithm, called NN, or k-NN in the more sophisticated version is extremely simple.…”
Section: Case-based Reasoning and Support Vector Machines (mentioning)
confidence: 99%
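The excerpt calls the nearest-neighbour formulation "extremely simple"; a minimal k-NN classifier in plain Python makes that concrete. This is a generic sketch, not the cited paper's implementation — the data and parameter values are illustrative only:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest
    training points under Euclidean distance. `train` is a list
    of (feature_vector, label) pairs."""
    # Sort the training set by distance to the query and keep the k closest.
    neighbours = sorted(
        train,
        key=lambda pair: math.dist(pair[0], query),
    )[:k]
    # Majority vote over the neighbours' labels.
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
print(knn_predict(train, (0.15, 0.1)))   # "a"
print(knn_predict(train, (1.05, 0.95)))  # "b"
```

With k = 1 this reduces to the plain NN rule the excerpt mentions; larger k trades locality for robustness to noisy neighbours.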
“…Important special cases of the above general form are the so-called Akaike Information Criterion (AIC) [Akaike 1974] and the Bayesian Information Criterion (BIC) [Schwarz 1978]. The former results for κ = 2 and is derived from asymptotic decision theoretic considerations.…”
Section: Information Criteria (mentioning)
confidence: 99%
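The two special cases the excerpt names can be written out explicitly. Assuming the standard penalized-likelihood form (with model dimension k, sample size n, and maximized likelihood L̂ — notation supplied here, not taken from the excerpt), the general criterion and its κ-instances are:

```latex
\mathrm{IC}_{\kappa} = -2\ln\hat{L} + \kappa\, k,
\qquad
\mathrm{AIC} = \mathrm{IC}_{2} = -2\ln\hat{L} + 2k,
\qquad
\mathrm{BIC} = \mathrm{IC}_{\ln n} = -2\ln\hat{L} + k\ln n
```

So, as the excerpt states, AIC results for κ = 2, while BIC replaces the constant penalty weight with ln n, penalizing model complexity more heavily as the sample grows.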