2005 IEEE Congress on Evolutionary Computation
DOI: 10.1109/cec.2005.1555009
GA-facilitated KNN classifier optimization with varying similarity measures

Abstract: Genetic algorithms are powerful tools for k-nearest neighbors (KNN) classifier optimization. While traditional KNN classification techniques typically employ Euclidean distance to assess pattern similarity, other measures may also be utilized. Previous research demonstrates that GAs can improve predictive accuracy by searching for optimal feature weights and offsets for a cosine similarity-based KNN classifier. GA-selected weights determine the classification relevance of each feature, while offsets provide …
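The weighted, offset cosine-similarity KNN the abstract describes can be sketched compactly. A minimal illustration, assuming per-feature weights and offsets are applied before computing cosine similarity; in the paper those vectors are evolved by a GA, whereas the fixed values below (and the function names) are hypothetical placeholders:

```python
import numpy as np

def weighted_cosine_similarity(x, y, weights, offsets):
    """Cosine similarity after a per-feature offset and weight transform."""
    xt = weights * (x + offsets)
    yt = weights * (y + offsets)
    denom = np.linalg.norm(xt) * np.linalg.norm(yt)
    return np.dot(xt, yt) / denom if denom > 0 else 0.0

def knn_predict(query, X_train, y_train, weights, offsets, k=3):
    """Classify `query` by majority vote among the k most similar samples."""
    sims = np.array([weighted_cosine_similarity(query, x, weights, offsets)
                     for x in X_train])
    nearest = np.argsort(sims)[-k:]  # largest similarity = nearest neighbors
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# Toy data; a GA would search for `weights` and `offsets` that maximize
# cross-validated accuracy -- the fixed values here are illustrative only.
X = np.array([[1.0, 2.0], [2.0, 1.0], [8.0, 9.0], [9.0, 8.0]])
y = np.array([0, 0, 1, 1])
w = np.array([1.0, 1.0])   # hypothetical GA-evolved feature weights
o = np.array([0.0, 0.0])   # hypothetical GA-evolved feature offsets
print(knn_predict(np.array([8.5, 8.5]), X, y, w, o, k=3))  # -> 1
```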

Cited by 27 publications (18 citation statements)
References 13 publications
“…For instance, it is simple to understand and interpret, and it is able to handle nominal and categorical data and perform well with large data sets in a short time. In this work, we use the C4.5 decision tree to predict the direction change of stock price because the C4.5 decision tree performs well in prediction applications, as reported in Peterson et al. (2005).…”
Section: Decision Tree
confidence: 99%
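For context, a direction-change classifier of this kind can be sketched with scikit-learn. Note that sklearn's DecisionTreeClassifier implements CART rather than C4.5 (the entropy criterion only approximates C4.5's information-gain splits), and the features and data below are synthetic placeholders:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical features: yesterday's return and volume change.
X = rng.normal(size=(200, 2))
# Hypothetical target: 1 = price up, 0 = price down (synthetic rule + noise).
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

# CART with the entropy criterion approximates C4.5's information-gain splits.
clf = DecisionTreeClassifier(criterion="entropy", max_depth=4).fit(X, y)
print(clf.predict([[0.8, -0.2]]))  # predicted direction for a new day
```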
“…The kth nearest neighbor (KNN) algorithm (Kelly et al., 1991; Peterson, Doom, & Raymer, 2005) is a classification algorithm based on the closest training examples in feature space. The training phase of the algorithm consists of storing the feature vectors and class labels of the training samples.…”
Section: Kth Nearest Neighbor
confidence: 99%
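A minimal sketch of that description, in which "training" merely stores the samples and prediction scans them; Euclidean distance is assumed here, and the class name is hypothetical:

```python
import numpy as np

class KNN:
    """Lazy learner: the training phase only stores the samples."""
    def __init__(self, k=3):
        self.k = k

    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)  # store, nothing else
        return self

    def predict_one(self, query):
        dists = np.linalg.norm(self.X - query, axis=1)  # Euclidean distance
        nearest = np.argsort(dists)[:self.k]            # k closest samples
        labels, counts = np.unique(self.y[nearest], return_counts=True)
        return labels[np.argmax(counts)]                # majority vote

model = KNN(k=3).fit([[0, 0], [0, 1], [5, 5], [6, 5]], [0, 0, 1, 1])
print(model.predict_one(np.array([5.5, 5.0])))  # -> 1
```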
“…Other similarity measures have been applied individually in the decision rule for classification purposes, for example the density similarity measure used in density-based classification [16] and data gravitation-based classification [17]. The classification method proposed in [18] with varying similarity measures (Euclidean distance, cosine similarity, and Pearson correlation) represents the first attempt. However, combining other similarity measures, such as density and gravity, has the potential to present a better view of the data distribution and hidden patterns within the training samples.…”
Section: Related Work
confidence: 99%
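The three measures varied in the cited method (Euclidean distance, cosine similarity, Pearson correlation) can be written side by side. A sketch assuming dense NumPy vectors, with the caveat that Euclidean distance is a dissimilarity (smaller means more similar):

```python
import numpy as np

def euclidean_distance(x, y):
    # Dissimilarity: smaller value = more similar.
    return np.linalg.norm(x - y)

def cosine_similarity(x, y):
    # Angle-based; ignores vector magnitude. Larger value = more similar.
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

def pearson_correlation(x, y):
    # Equivalent to cosine similarity of the mean-centered vectors.
    xc, yc = x - x.mean(), y - y.mean()
    return np.dot(xc, yc) / (np.linalg.norm(xc) * np.linalg.norm(yc))

a, b = np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.5])
print(euclidean_distance(a, b), cosine_similarity(a, b), pearson_correlation(a, b))
```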
“…In order to analyze such data sets, clustering methods are often used as an integral tool for data preprocessing [1]. This is especially the case when data is recorded from multiple sources in an uncontrolled environment.…”
Section: Introduction
confidence: 99%
“…An objective of clustering is to identify parts of the data that have high degrees of similarity with other parts of the data, and to group the similar parts together into clusters. Similarity can be measured by many means, such as Euclidean distance, Mahalanobis distance, cosine similarity, and Pearson correlation [1].…”
Section: Introduction
confidence: 99%
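Of the measures listed, Mahalanobis distance is the only one that also needs the data's covariance, which lets it correct for correlated features that plain Euclidean distance ignores. A minimal sketch (the function name is hypothetical):

```python
import numpy as np

def mahalanobis_distance(x, y, data):
    """Distance between x and y, whitened by the covariance of `data`."""
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    diff = x - y
    return np.sqrt(diff @ cov_inv @ diff)

# Correlated features stretch Euclidean distance; Mahalanobis corrects for it.
X = np.array([[1.0, 2.2], [2.0, 3.9], [3.0, 6.3], [4.0, 7.8]])
print(mahalanobis_distance(X[0], X[3], X))
```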