2005
DOI: 10.5565/rev/elcvia.96

A K Nearest Classifier design

Abstract: This paper presents a multi-classifier system design controlled by the topology of the learning data. Our work also introduces a training algorithm for an incremental self-organizing map (SOM). This SOM is used to distribute classification tasks to a set of classifiers, so that only the relevant classifiers are activated when new data arrive. Comparative results are given for synthetic problems, for an image segmentation problem from the UCI repository and for a handwritten digit recognition problem.
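The routing idea in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's algorithm: the SOM here is reduced to plain competitive learning with a fixed number of nodes (the paper's map is incremental), and each node's "classifier" is simply a local 1-NN over the training samples that node wins. The class name `SOMGatedEnsemble` and all parameter values are hypothetical.

```python
import numpy as np

class SOMGatedEnsemble:
    """Sketch: a small competitive-learning map routes each sample to a
    local 1-NN classifier, so only the classifier owning the winning
    node is activated for a given input."""

    def __init__(self, n_nodes=4, lr=0.5, epochs=20, seed=0):
        self.n_nodes, self.lr, self.epochs = n_nodes, lr, epochs
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        # Initialise prototypes from random training samples.
        idx = self.rng.choice(len(X), self.n_nodes, replace=False)
        self.W = X[idx].astype(float).copy()
        # Plain competitive learning (no neighbourhood, for brevity).
        for t in range(self.epochs):
            lr = self.lr * (1 - t / self.epochs)
            for x in X:
                b = np.argmin(((self.W - x) ** 2).sum(axis=1))
                self.W[b] += lr * (x - self.W[b])
        # Partition the training set among the nodes; each node keeps
        # its own local samples as a 1-NN classifier.
        owners = np.argmin(((X[:, None, :] - self.W[None]) ** 2).sum(-1), axis=1)
        self.local = {b: (X[owners == b], y[owners == b])
                      for b in range(self.n_nodes)}
        return self

    def predict(self, X):
        out = []
        for x in X:
            b = int(np.argmin(((self.W - x) ** 2).sum(axis=1)))
            Xl, yl = self.local[b]
            if len(Xl) == 0:  # empty node: fall back to a global 1-NN
                Xl = np.vstack([v[0] for v in self.local.values() if len(v[0])])
                yl = np.concatenate([v[1] for v in self.local.values() if len(v[1])])
            out.append(yl[np.argmin(((Xl - x) ** 2).sum(axis=1))])
        return np.array(out)
```

On two well-separated Gaussian blobs, each node specialises on one region of the input space and its local 1-NN handles only the samples routed to it.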

Cited by 5 publications (2 citation statements)
References 17 publications
“…Our aim is rather to understand the role of the parameter values on the behavior of the RF. That is why we have decided to arbitrarily choose a commonly used feature extraction technique based on a greyscale multi-resolution pyramid [14]. We have extracted for each image of our set, 84 greyscale mean values based on four resolution levels of the image, as illustrated in figure 1.…”
Section: Experimental Protocol
confidence: 99%
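The greyscale multi-resolution pyramid features mentioned in the citation above can be sketched as follows. The exact block layout that yields the 84 values is an assumption here: splitting the image into 2×2, 4×4 and 8×8 grids of blocks gives 4 + 16 + 64 = 84 mean grey values, which matches the count quoted; the function name `pyramid_means` and the level choice are hypothetical.

```python
import numpy as np

def pyramid_means(img, levels=(2, 4, 8)):
    """For each level n, split the image into an n x n grid of blocks
    and keep each block's mean grey value as one feature."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    feats = []
    for n in levels:
        bh, bw = h // n, w // n  # block size at this level
        for i in range(n):
            for j in range(n):
                block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats.append(block.mean())
    return np.array(feats)
```

For an 8×8 image this produces an 84-dimensional feature vector; with `levels=(1,)` it degenerates to the single global mean grey value.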
“…For instance, Bernard et al [16] test random forest classifier on MNIST dataset. In this work, the grayscale multi-resolution pyramid method [17] is used as a feature extraction technique. Using the verified data for selecting parameters of random forest classifier, they obtain a success accuracy of 93.27%.…”
Section: Handwritten Digit Recognition Methods
confidence: 99%