Efficient k-NN classification based on homogeneous clusters (2013)
DOI: 10.1007/s10462-013-9411-1

Cited by 7 publications (6 citation statements); references 26 publications.
“…Classification of large datasets requires high computational cost [16]. That work introduced a new method, Fast Hybrid Classification (FHC), in which a Two-Level Data Structure (TLDS) is built using the k-Means algorithm, further speeding up access to the TLDS.…”
Section: Methods (mentioning, confidence: 99%)
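The excerpt does not spell out FHC's internals, so the sketch below is only a plausible reading of a two-level structure built with k-Means: level one holds cluster centroids, level two the training points assigned to each cluster, and a query is routed to its nearest centroid before a local k-NN vote. The class name `TwoLevelDataStructure` and all parameters are assumptions for illustration, not the authors' API.

```python
# Illustrative sketch only; FHC's exact algorithm is not given in the excerpt.
import numpy as np
from sklearn.cluster import KMeans

class TwoLevelDataStructure:
    """Level 1: k-Means centroids. Level 2: the training points
    (and labels) assigned to each centroid's cluster."""

    def __init__(self, n_clusters=50):
        self.n_clusters = n_clusters

    def fit(self, X, y):
        km = KMeans(n_clusters=self.n_clusters, n_init=10).fit(X)
        self.centroids_ = km.cluster_centers_            # level 1
        self.buckets_ = [                                # level 2
            (X[km.labels_ == c], y[km.labels_ == c])
            for c in range(self.n_clusters)
        ]
        return self

    def predict_one(self, x, k=3):
        # Route the query to its nearest centroid, then run k-NN only
        # inside that cluster instead of over the whole training set.
        c = np.argmin(np.linalg.norm(self.centroids_ - x, axis=1))
        Xc, yc = self.buckets_[c]
        nearest = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
        labels, counts = np.unique(yc[nearest], return_counts=True)
        return labels[np.argmax(counts)]
```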
“…On a very high level, this is similar to cluster-based NNC, where class-specific training data (data with the same label) is summarized as (multiple) cluster centers and used as a reduced training set on which NNC is applied. This is also known as prototype-based classification (CC); in its simplest form there is a single cluster/prototype per class (CC1). A variety of methods in the literature adopt this simple idea of data reduction [9, 11, 24, 25, 27, 35]. These algorithms are designed to reduce the high computational and storage requirements of NNC.…”
Section: Related Work (mentioning, confidence: 99%)
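Since this statement defines prototype-based classification concretely, here is a minimal sketch of the reduced-training-set idea, assuming scikit-learn's k-Means and k-NN: per-class cluster centers become the prototypes, and NNC then runs over them. Setting `n_prototypes=1` recovers CC1. The function name and parameters are illustrative, not taken from any cited paper.

```python
# Minimal sketch of prototype-based classification (CC / CC1).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

def fit_prototype_classifier(X, y, n_prototypes=1, k=1):
    protos, proto_labels = [], []
    for label in np.unique(y):
        # Summarize the class-specific data as cluster centers.
        km = KMeans(n_clusters=n_prototypes, n_init=10).fit(X[y == label])
        protos.append(km.cluster_centers_)
        proto_labels.extend([label] * n_prototypes)
    reduced_X = np.vstack(protos)
    # NNC on the reduced training set of prototypes.
    return KNeighborsClassifier(n_neighbors=k).fit(
        reduced_X, np.array(proto_labels))
```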
“…On a very high level, this is similar to cluster-based k-NNC, where class-specific training data (data with the same label) is summarized as (multiple) cluster centers and used as a reduced training set on which k-NNC is applied. A variety of methods in the literature adopt this simple idea of data reduction [Zhou et al., 2010; Parvin et al., 2012; Ougiaroglou and Evangelidis, 2013, 2016; Gallego et al., 2018; Gou et al., 2019]. These algorithms are designed to reduce the high computational and storage requirements of k-NNC.…”
Section: Related Work (mentioning, confidence: 99%)
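To make the stated computational and storage savings concrete, a hypothetical usage of the `fit_prototype_classifier` sketch above: 3 classes times 10 prototypes leaves 30 stored points in place of 10,000, so each query performs far fewer distance computations. The dataset here is synthetic and purely for illustration.

```python
# Hypothetical usage of the sketch above, showing the data-reduction goal.
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=10_000, n_features=20, n_classes=3,
                           n_informative=5, random_state=0)
clf = fit_prototype_classifier(X, y, n_prototypes=10, k=3)
print(clf.predict(X[:5]))   # k-NNC over 30 prototypes, not 10,000 points
```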