2005
DOI: 10.1007/s11222-005-6203-8

Kernel density classification and boosting: an L2 analysis

Abstract (University of Leeds): Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods…
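As a concrete illustration of the setting in the abstract, the sketch below classifies a point to the group whose prior-weighted kernel density estimate is larger. It is a minimal example under stated assumptions, not the authors' procedure: the Gaussian kernel, the bandwidths h0 and h1, and all variable names are illustrative choices rather than the optimal smoothing parameters derived in the paper.

import numpy as np

def gaussian_kde_at(x, sample, h):
    # Gaussian kernel density estimate at a single point x.
    u = (x - sample) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))

def classify(x, sample0, sample1, h0, h1):
    # Assign x to class 1 when the prior-weighted density difference is positive.
    p1 = len(sample1) / (len(sample0) + len(sample1))
    d = p1 * gaussian_kde_at(x, sample1, h1) - (1 - p1) * gaussian_kde_at(x, sample0, h0)
    return int(d > 0)

rng = np.random.default_rng(0)
g0 = rng.normal(0.0, 1.0, 200)   # class 0 training sample
g1 = rng.normal(1.5, 1.0, 150)   # class 1 training sample
print(classify(0.2, g0, g1, h0=0.4, h1=0.4), classify(1.8, g0, g1, h0=0.4, h1=0.4))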

Cited by 24 publications (10 citation statements); references 23 publications.
“…This conclusion is consistent with that found by Di Marzio and Taylor [5,6], where boosting kernels gives higher-order bias for both density estimation and classification. However, note that the current result uses L2 Boosting for regression, rather than the Adaboost-like algorithms used in classification and density estimation.…”
Section: BoostNW Reduces the Bias of the N-W Estimator (supporting)
confidence: 92%
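The statement above refers to L2 Boosting of a Nadaraya-Watson (N-W) regression smoother: each round fits the kernel smoother to the current residuals and adds the fit to the ensemble, which is what reduces bias relative to a single N-W fit. The sketch below is only an illustration of that scheme; the bandwidth h, step size nu and number of rounds are assumptions, not values taken from the cited work.

import numpy as np

def nw_smooth(x_eval, x_train, y, h):
    # Nadaraya-Watson estimate with a Gaussian kernel at each point of x_eval.
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def l2_boost_nw(x_train, y, h=0.3, rounds=10, nu=0.5):
    # L2 boosting: repeatedly smooth the residuals and accumulate the fits.
    fit = np.zeros_like(y)
    for _ in range(rounds):
        residual = y - fit
        fit = fit + nu * nw_smooth(x_train, x_train, residual, h)
    return fit

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, 100)
print(np.round(l2_boost_nw(x, y)[:5], 3))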
“…In all three domains, methods exist which make use of a kernel function (kernel density estimation, kernel classifiers and kernel regression); these are often referred to as simply "nonparametric". Making use of these kernel methods, Di Marzio and Taylor [5][6][7] have indicated how boosting derives its success: namely, by reducing the bias of the estimators, with only moderate increases in variance. Using this result, one is able to use larger smoothing parameters and improve the overall quality of the final estimate.…”
Section: Introduction (mentioning)
confidence: 99%
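The bias-reduction mechanism mentioned above is developed in the cited papers and is not reproduced here. As a stand-in, the sketch below applies a standard multiplicative bias-correction step (a pilot kernel density estimate correcting a second pass) to a Gaussian KDE with a deliberately large bandwidth; it illustrates the same idea that a bias-corrected estimator tolerates heavier smoothing, but it is not the boosting algorithm of Di Marzio and Taylor. All constants are illustrative.

import numpy as np

def kde(x_eval, sample, h):
    # Gaussian kernel density estimate on a grid of evaluation points.
    u = (np.asarray(x_eval)[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def corrected_kde(x_eval, sample, h):
    # One multiplicative correction step: f1(x) * mean_i K_h(x - X_i) / f1(X_i).
    f1_at_data = kde(sample, sample, h)          # pilot estimate at the data points
    u = (np.asarray(x_eval)[:, None] - sample[None, :]) / h
    k = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    return kde(x_eval, sample, h) * (k / f1_at_data[None, :]).mean(axis=1)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-3.0, 3.0, 7)
print(np.round(kde(grid, data, h=0.8), 3))           # oversmoothed single estimate
print(np.round(corrected_kde(grid, data, h=0.8), 3)) # corrected estimate, same bandwidth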
“…The central idea of the KDE method is to apply kernel density estimation [8] on the BMD-list to estimate the probability density functions of the target class and the non-target classes, and then set the distance threshold δ so that at every point in the range of [0, δ] the probability density of belonging to the target class passes a probability threshold.…”
Section: KDE: Learning Distance Thresholds Using Kernel Density Estimation (mentioning)
confidence: 99%
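The passage above describes choosing a distance threshold δ from two kernel density estimates. The sketch below is a hypothetical reading of that recipe: fit a Gaussian KDE to the distances of target and non-target examples, then pick the largest δ such that the estimated probability of "target" stays above a cutoff everywhere on [0, δ]. Names such as learn_threshold, the cutoff value, and the gamma-distributed example distances are assumptions, not the BMD-list procedure of the citing paper.

import numpy as np

def kde(x_eval, sample, h):
    # Gaussian kernel density estimate on a grid of evaluation points.
    u = (np.asarray(x_eval)[:, None] - np.asarray(sample)[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def learn_threshold(target_dists, other_dists, h=0.1, cutoff=0.9, grid_size=200):
    # Largest delta such that the estimated probability of "target" stays
    # above `cutoff` at every grid point in [0, delta].
    grid = np.linspace(0.0, max(np.max(target_dists), np.max(other_dists)), grid_size)
    f_t = kde(grid, target_dists, h)
    f_o = kde(grid, other_dists, h)
    p_target = f_t / (f_t + f_o + 1e-12)       # pointwise probability of the target class
    below = np.nonzero(p_target < cutoff)[0]
    if below.size == 0:
        return grid[-1]                         # cutoff never violated on the grid
    return grid[max(below[0] - 1, 0)]

rng = np.random.default_rng(3)
delta = learn_threshold(rng.gamma(2.0, 0.05, 300), rng.gamma(6.0, 0.2, 300))
print(round(float(delta), 3))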
“…where K is a kernel function and h is a smoothing factor [8]. In this paper, we adopt the Gaussian kernel which has been popularly used.…”
Section: KDE: Learning Distance Thresholds Using Kernel Density Estimation (mentioning)
confidence: 99%
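For reference, the estimator referred to in the quotation is the standard univariate kernel density estimate with a Gaussian kernel (the textbook form, not copied from the citing paper):

$$\hat f_h(x) \;=\; \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right), \qquad K(u) \;=\; \frac{1}{\sqrt{2\pi}}\, e^{-u^{2}/2},$$

where $x_1, \dots, x_n$ are the observed values, $K$ is the kernel function and $h$ is the smoothing factor (bandwidth).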
“…Classification is performed with supervised non-parametric KDA [14,15]. The KDA is a binary classifier: it simply gives a `yes` or `no` answer to indicate whether a particular sample belongs to a particular class or not.…”
Section: Classification Background (mentioning)
confidence: 99%
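As a generic illustration of that yes/no decision (not the particular KDA variant used in the citing paper), a sample can be accepted for a class when its kernel density estimate under that class exceeds the estimate under the pooled remaining classes; the function names, bandwidth and toy samples below are assumptions.

import numpy as np

def gauss_kde_at(x, sample, h):
    # Gaussian kernel density estimate at a single point x.
    u = (x - np.asarray(sample)) / h
    return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2 * np.pi))

def belongs_to_class(x, class_sample, rest_sample, h=0.5):
    # "yes" if the density under the class exceeds the density under the rest.
    return "yes" if gauss_kde_at(x, class_sample, h) > gauss_kde_at(x, rest_sample, h) else "no"

print(belongs_to_class(0.1, class_sample=[0.0, 0.2, -0.1], rest_sample=[2.0, 2.3, 1.8]))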