2021
DOI: 10.1214/21-ejs1913
Rate of estimation for the stationary distribution of jump-processes over anisotropic Hölder classes

Abstract: We study the problem of nonparametric estimation of the density π of the stationary distribution of a multivariate stochastic differential equation with jumps (X_t)_{0 ≤ t ≤ T}, when the dimension d satisfies d ≥ 3. From continuous observation of the sample path on [0, T], we show that, under anisotropic Hölder smoothness constraints, kernel-based estimators can achieve fast convergence rates. In particular, they are as fast as the ones found by Dalalyan and Reiss [11] for the estimation of the inv…
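The kernel-based estimators the abstract refers to can be illustrated with a short sketch. This is a hypothetical illustration, not the paper's exact estimator: it assumes the continuous path has been discretized into n samples in R^d, and uses a Gaussian product kernel with one bandwidth per coordinate, which is how anisotropic Hölder smoothness is typically exploited (different smoothness per direction, hence different bandwidths).

```python
import numpy as np

def kernel_density(x, samples, bandwidths):
    """Estimate the stationary density at point x from samples of shape (n, d).

    Uses a Gaussian product kernel with a separate bandwidth per coordinate,
    so smoother directions can take larger bandwidths (the anisotropic case).
    """
    samples = np.asarray(samples, dtype=float)
    h = np.asarray(bandwidths, dtype=float)
    u = (samples - np.asarray(x, dtype=float)) / h       # (n, d) scaled offsets
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)       # per-coordinate kernels
    # Product over coordinates, average over samples, rescale by bandwidths.
    return np.mean(np.prod(k, axis=1)) / np.prod(h)

# Toy usage: 3-d standard normal samples stand in for the observed path,
# so the true stationary density at the origin is (2*pi)**(-3/2) ~ 0.063.
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 3))
est = kernel_density([0.0, 0.0, 0.0], X, [0.3, 0.3, 0.3])
```

In the paper's setting the samples would come from the observed diffusion path rather than i.i.d. draws, and the bandwidths would be tuned to the anisotropic Hölder exponents to obtain the stated rates.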

Cited by 8 publications (23 citation statements). References 32 publications.
“…The proof of Theorem 3 follows along the same lines as that of Theorem 2 in [3], where a lower bound for the kernel estimator of the invariant density for the solution to (8) for d ≥ 3 is obtained. The proof is based on the two hypotheses method, explained for example in Section 2.3 of [37].…”
Section: Model Assumption and Main Resultsmentioning
confidence: 95%
“…Instead, we use Kullback's version of the finite number of hypotheses method, as stated in Lemma C.1 of [36]; see Lemma 2 below. Observe that this method gives a slightly weaker lower bound, as we get a sup_x inside the expectation, while the method in [3] provides an inf_x outside the expectation.…”
Section: Model Assumption and Main Resultsmentioning
confidence: 99%