Lazy Learning 1997
DOI: 10.1007/978-94-017-2053-3_11

A Review and Empirical Evaluation of Feature Weighting Methods for a Class of Lazy Learning Algorithms

Cited by 241 publications (323 citation statements)
References 37 publications
“…In the AI arena there are two main streams. The first one looks for weightings of the attributes values to be used when we measure the similarity of observed cases in the past to solve present situations (Blum & Langley, 1997;Wettschereck, Aha, & Mohri, 1997); usually, the assigned weightings range between 0 and 1. On the other hand, there are the proposals that search for a subset of relevant or not redundant attributes to be taken into account by any ML algorithm (John, Kohavi, & Pfleger, 1994); these methodologies select the more relevant attributes, removing the rest.…”
Section: Machine Learning Outputs
Citation type: mentioning
confidence: 99%
“…For the NN classifier, we used the simple Euclidean distance measure, and we didn't include feature selection or feature weighting (even though it is well-known that irrelevant features can badly deteriorate [performance] and, on the other hand, that feature weighting can greatly improve performance [32]). In fact, we didn't try to optimize the performance of the three learning methods themselves, because this was not the goal of the experiments.…”
Section: Methods
Citation type: mentioning
confidence: 99%
“…To improve classification accuracy, we apply feature weighting, in which the features are scaled so that their numerical range better matches their relevance [17,28]. Feature selection is implicitly included by allowing the weight of a feature to be zero.…”
Section: Genetic Optimization For Feature Weighting
Citation type: mentioning
confidence: 99%