2015
DOI: 10.1016/j.acha.2014.04.004

Multichannel deconvolution with long range dependence: Upper bounds on the Lp-risk (1 ≤ p < ∞)

Abstract: We consider multichannel deconvolution in a periodic setting with long-memory errors under three different scenarios for the convolution operators: super-smooth, regular-smooth and box-car convolutions. We investigate global performances of linear and hard-thresholded non-linear wavelet estimators for functions over a wide range of Besov spaces and for a variety of loss functions defining the risk. In particular, we obtain upper bounds on convergence rates using the Lp-risk (1 ≤ p < ∞). Contrary to the…
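The hard-thresholded wavelet estimator discussed in the abstract can be illustrated with a minimal sketch: transform the observations into wavelet coefficients, zero every detail coefficient whose magnitude falls below a threshold, and invert the transform. This is only a generic one-channel illustration using a Haar transform, not the paper's multichannel estimator; in the paper the threshold depends on the long-memory and smoothness parameters of the channels, whereas `lam` here is a free parameter chosen by the user.

```python
import numpy as np

def haar_forward(x):
    """Full Haar decomposition of a length-2^J signal.

    Returns the single approximation coefficient and a list of
    detail-coefficient arrays, coarsest level first.
    """
    details = []
    a = np.asarray(x, dtype=float)
    while len(a) > 1:
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(detail)   # finest level appended first
        a = approx
    return a, details[::-1]       # reorder: coarsest first

def haar_inverse(a, details):
    """Invert haar_forward: rebuild the signal level by level."""
    for d in details:
        up = np.empty(2 * len(a))
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def hard_threshold_estimate(y, lam):
    """Hard thresholding: keep a detail coefficient only if |d| > lam."""
    a, details = haar_forward(y)
    details = [np.where(np.abs(d) > lam, d, 0.0) for d in details]
    return haar_inverse(a, details)
```

With `lam = 0` the estimator reproduces the data exactly; with a very large `lam` all details are killed and the estimate collapses to the sample mean, which is the usual bias-variance trade-off that the paper's rate analysis quantifies.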


Cited by 13 publications (11 citation statements) · References 25 publications
“…(iii) For δ = 0, our rates coincide, up to some logarithmic factor of ε ≍ n^{−1/2}, with the upper bounds obtained in Kulik et al (2015) in the regular-smooth convolution case.…”
Section: Estimation Algorithm (supporting)
confidence: 80%
“…To determine the choices of J, m₀ and λ^α_{j;ε,δ} in (9) and (10), it is necessary to evaluate the variance of (7). Thus, recall that by (8), one has |m| ≍ 2^j, and define for some constant 0 < ρ < 1/2, the sets Ω₁ and Ω₂ as…”
Section: Estimation Algorithm (mentioning)
confidence: 99%
“…More specifically, the stronger the LM is, the slower the convergence rates will be, compared to Benhaddou et al (2013). This detrimental effect of LM on convergence rates was pointed out in Wishart (2013), Kulik et al (2015), Benhaddou (2016) and Benhaddou (2017b). (v) Note that our rates of convergence are not directly comparable to those in Kulik et al (2015), since their rates pertain to an estimator of a one-dimensional function using a finite number of (M different) channels, while in the present work the convergence rates are associated with the estimation of a two-dimensional function and that the number of profiles M is asymptotic.…”
Section: Estimation Algorithm (mentioning)
confidence: 93%
“…Theorem 2 Let f(·, ·) be the wavelet estimator in (14), with λ(j₁, j₂) given by (19) or (20), and J₁ and J₂ given by (21). Let min{s₁, s₂} ≥ max{1/p, 1/2}, and let conditions (3), (9), (10) and (25) hold.…”
Section: Estimation Algorithm (mentioning)
confidence: 99%