2015
DOI: 10.3150/14-bej633
Pointwise adaptive estimation of a multivariate density under independence hypothesis

Abstract: In this paper, we study the problem of pointwise estimation of a multivariate density. We provide a data-driven selection rule from the family of kernel estimators and derive for it a pointwise oracle inequality. Using the latter bound, we show that the proposed estimator is minimax and minimax adaptive over the scale of anisotropic Nikolskii classes. It is important to emphasize that our estimation method adjusts automatically to a possible independence structure of the underlying density. This, in turn, al…
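
The method described in the abstract selects from a family of kernel estimators evaluated at a single point. As a rough illustration only, the sketch below computes a pointwise multivariate kernel density estimate with a product kernel and hand-chosen per-coordinate (anisotropic) bandwidths; the function name, the Epanechnikov kernel and the bandwidth values are illustrative assumptions, and the paper's actual data-driven selection rule and oracle inequality are not reproduced here.

```python
import numpy as np

def kernel_density_at_point(x0, sample, bandwidths, kernel=None):
    """Pointwise multivariate kernel density estimate with a product kernel
    and one bandwidth per coordinate (anisotropic smoothing).

    x0         : point of estimation, shape (d,)
    sample     : data, shape (n, d)
    bandwidths : per-coordinate bandwidths h_1, ..., h_d, shape (d,)
    """
    if kernel is None:
        # Epanechnikov kernel on [-1, 1]; any bounded, compactly supported
        # univariate kernel could be plugged in instead.
        kernel = lambda u: 0.75 * np.clip(1.0 - u ** 2, 0.0, None)
    n, d = sample.shape
    u = (sample - x0) / bandwidths          # shape (n, d)
    weights = np.prod(kernel(u), axis=1)    # product kernel across coordinates
    return weights.sum() / (n * np.prod(bandwidths))

# Toy usage: 2-d standard normal sample, estimate the density at the origin
# (true value 1/(2*pi) ≈ 0.159); the bandwidths are picked by hand here.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))
print(kernel_density_at_point(np.zeros(2), X, np.array([0.3, 0.3])))
```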

Cited by 23 publications (22 citation statements), published between 2017 and 2023. References 37 publications.

Citation statements (ordered by relevance):

“…It turns out that the convergence rate of our estimators coincides with the optimal convergence rate for pointwise estimation [2]. Furthermore, our theorem reduces to the corresponding results of Rebelles [14] when ω(y) ≡ 1 and the sample is independent.…”
Section: Introduction
confidence: 56%
“…This partially generalizes previous results obtained in the i.i.d. case by Butucea (2000) and Rebelles (2015). To the best of our knowledge, this is the first adaptive result for pointwise density estimation in the context of dependent data.…”
Section: Introduction
confidence: 80%
“…Indeed, the assumption that h belongs to H^β([0, a + ε], L) allows us to state that f_Y belongs to H^{β+1}([a − ε′, a + ε′], L′) for some L′ > 0 and some ε′ ∈ (0, a) (see Proposition 6.1). This smoothness result ensures that the kernel estimate of f_Y has a pointwise risk upper-bounded by C(n/ln(n))^{−2(β+1)/(2β+3)} (C a constant) for a "hand-chosen" bandwidth w ≍ n^{−1/(2β+1)}; see Rebelles (2015) for example. Thus, as D_m is set to …(2β+3)…”
Section: Upper-bound on the risk of the estimator
confidence: 98%
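
As a purely numerical aside, the rate and bandwidth quoted in the statement above can be evaluated for concrete values of n and β. The values used below (β = 1 and a few sample sizes) are arbitrary illustrations, not quantities taken from the cited paper.

```python
import numpy as np

def pointwise_rate(n, beta):
    """Evaluate the quoted pointwise risk rate (n/ln n)^(-2(beta+1)/(2beta+3))
    and the quoted hand-chosen bandwidth n^(-1/(2beta+1))."""
    rate = (n / np.log(n)) ** (-2.0 * (beta + 1.0) / (2.0 * beta + 3.0))
    bandwidth = n ** (-1.0 / (2.0 * beta + 1.0))
    return rate, bandwidth

for n in (10**3, 10**4, 10**5):
    r, w = pointwise_rate(n, beta=1.0)  # beta = 1 chosen only for illustration
    print(f"n={n:>6}  risk bound ∝ {r:.4f}  bandwidth ∝ {w:.4f}")
```
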
“…We also choose w in the definition (10) with a Goldenshluger-Lepski type method over a collection of possible bandwidths larger than 1/√n, as described in Comte (2015) or Rebelles (2015). This allows us to derive the following convergence rate.…”
Section: Upper-bound on the risk of the estimator
confidence: 99%
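
The Goldenshluger-Lepski type method mentioned in this last statement compares estimators across a grid of bandwidths and penalizes each comparison by a stochastic majorant. The sketch below is a much simplified univariate version at a single point, assuming a Gaussian kernel, a majorant of the hypothetical form κ·sqrt(log n / (n h)) and an uncalibrated constant κ; the cited papers use carefully calibrated constants and, in Rebelles (2015), anisotropic multivariate bandwidth collections.

```python
import numpy as np

def kde_at_point_1d(x0, sample, h):
    """Univariate Gaussian-kernel density estimate at the point x0."""
    u = (sample - x0) / h
    return np.exp(-0.5 * u ** 2).sum() / (len(sample) * h * np.sqrt(2.0 * np.pi))

def select_bandwidth_gl(x0, sample, grid, kappa=1.0):
    """Simplified Goldenshluger-Lepski type selection at a single point:
    minimize a bias proxy plus a stochastic majorant over the grid.
    The majorant kappa*sqrt(log n / (n h)) is an illustrative choice."""
    n = len(sample)
    grid = np.sort(np.asarray(grid, dtype=float))
    est = {h: kde_at_point_1d(x0, sample, h) for h in grid}
    major = {h: kappa * np.sqrt(np.log(n) / (n * h)) for h in grid}
    best_h, best_crit = None, np.inf
    for h in grid:
        # Bias proxy: largest excess deviation against smaller bandwidths,
        # each comparison discounted by the majorant of the finer estimator.
        bias_proxy = max(
            (max(abs(est[hp] - est[h]) - major[hp], 0.0) for hp in grid if hp <= h),
            default=0.0,
        )
        crit = bias_proxy + major[h]
        if crit < best_crit:
            best_h, best_crit = h, crit
    return best_h

# Toy usage: standard normal sample, grid of bandwidths larger than 1/sqrt(n)
# to mirror the restriction on the collection mentioned in the quote above.
rng = np.random.default_rng(1)
X = rng.standard_normal(2000)
grid = np.geomspace(1.0 / np.sqrt(len(X)), 1.0, num=15)
print(select_bandwidth_gl(0.0, X, grid))
```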