2012
DOI: 10.1007/s11263-012-0539-2
Boosting k-NN for Categorization of Natural Scenes

Cited by 12 publications (22 citation statements). References 36 publications.
“…The boosting method has been employed in a number of computer vision applications: classification of handwritten data [3], real-time face detection [44], object tracking [33], classification of trees and vehicles in urban scenes [45], categorization of natural scenes [31], keypoint detection and landmarking on human faces [10], and person re-identification [30]. It has not yet been used to evaluate point matches between overlapping 3D partial shapes, perhaps for several reasons.…”
Section: Boosting (mentioning, confidence: 99%)
“…The most common high-level scheme gradually builds, from scratch, a combination of the classifiers obtained at each iteration, with the objective of minimizing over the iterations a convex differentiable risk called a surrogate risk, sometimes amended with a structural part based on the data [2]. Unlike so-called greedy algorithms, which repeatedly perform fine-grained optimization steps [2], boosting algorithms rely on weak optimization stages that are much less demanding from the statistical and computational standpoints [3], [4], [5], [6], [7]. In fact, boosting theory involves at each iteration weak classifiers only slightly better than random guessing, yet requires that the final combination provably come as close as desired to the optimum within polynomial time.…”
Section: Introduction (mentioning, confidence: 99%)
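The scheme described in the statement above — iteratively adding weak classifiers, each weighted by a leveraging coefficient, so as to drive down a convex surrogate of the classification risk — can be illustrated with a minimal AdaBoost-style sketch. This is a generic illustration with decision stumps, not the algorithm of the cited papers; all function names and the choice of the exponential surrogate are assumptions for the example.

```python
import numpy as np

def train_stump(X, y, w):
    """Pick the single-feature threshold stump with lowest weighted error.
    (Illustrative weak learner; any slightly-better-than-random rule works.)"""
    best = (1.0, 0, 0.0, 1)  # (error, feature, threshold, polarity)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(X, y, rounds=20):
    """Gradually combine weak stumps; each round takes a weak optimization
    step on the exponential surrogate risk sum_i exp(-y_i * H(x_i))."""
    n = len(y)
    w = np.full(n, 1.0 / n)                     # example weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, s = train_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # leveraging coefficient
        pred = np.where(X[:, j] <= t, s, -s)
        w *= np.exp(-alpha * y * pred)          # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Sign of the weighted combination H(x) of all weak stumps."""
    H = sum(a * np.where(X[:, j] <= t, s, -s) for a, j, t, s in ensemble)
    return np.sign(H)
```

The key contrast with fully greedy methods is visible in `train_stump`: each round only needs a weak classifier with edge over random, yet the exponential reweighting makes the combined `H` converge toward the optimum of the surrogate.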
“…Nearest-neighbor (NN) rules are a non-trivial field of choice for boosting algorithms [4], [5], as the examples themselves ideally play the role of weak classifiers. In this setting, we treat the boosting problem in its simplest form: the accurate leveraging of examples that vote among the nearest neighbors.…”
Section: Introduction (mentioning, confidence: 99%)
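The idea of treating each training example as a weak classifier — it votes its own label on any query for which it is among the k nearest neighbors, and abstains otherwise — can be sketched as follows. This is a simplified illustration of boosted k-NN voting, not the paper's actual leveraging algorithm; the one-pass weighting rule and all names are assumptions for the example.

```python
import numpy as np

def knn_mask(X_train, X_query, k):
    """mask[i, q] = True iff training example i is among the k nearest
    neighbors (Euclidean) of query q."""
    d = np.linalg.norm(X_train[:, None, :] - X_query[None, :, :], axis=2)
    idx = np.argsort(d, axis=0)[:k]          # k nearest per query column
    mask = np.zeros(d.shape, dtype=bool)
    for q in range(d.shape[1]):
        mask[idx[:, q], q] = True
    return mask

def boost_knn(X, y, k=3):
    """Leverage each example as a weak classifier that votes its own label
    wherever it is a k-nearest neighbor; fit vote weights AdaBoost-style
    (illustrative single pass over the examples)."""
    n = len(y)
    mask = knn_mask(X, X, k)                 # who votes where, on the training set
    w = np.full(n, 1.0 / n)
    alpha = np.zeros(n)
    for e in range(n):
        votes = mask[e]                      # points on which example e votes
        agree = w[votes & (y == y[e])].sum()
        disagree = w[votes & (y != y[e])].sum()
        alpha[e] = 0.5 * np.log((agree + 1e-10) / (disagree + 1e-10))
        pred = np.where(votes, y[e], 0)      # abstains outside its neighborhoods
        w *= np.exp(-alpha[e] * y * pred)    # upweight points e misclassifies
        w /= w.sum()
    return alpha

def predict_knn(X_train, y_train, alpha, X_query, k=3):
    """Sign of the leveraged k-NN vote at each query point."""
    mask = knn_mask(X_train, X_query, k)
    scores = (alpha * y_train) @ mask        # weighted votes per query
    return np.sign(scores)
```

The design point the sketch makes concrete: unlike plain k-NN, where every neighbor counts equally, each example's vote is scaled by a leveraging coefficient reflecting how reliably it classifies the points whose neighborhoods it belongs to.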