2021
DOI: 10.3390/e23060722

Minimax Rates of ℓp-Losses for High-Dimensional Linear Errors-in-Variables Models over ℓq-Balls

Abstract: In this paper, the high-dimensional linear regression model is considered, where the covariates are measured with additive noise. Unlike most existing methods, which assume that the true covariates are fully observed, the results in this paper require only that the corrupted covariate matrix is observed. Then, by the application of information theory, the minimax rates of convergence for estimation are investigated in terms of the ℓp (1≤p<∞)-losses under the general sparsity assu…
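
For orientation, a minimal sketch of the model the abstract describes is given below in LaTeX. The notation (true covariates X, observed matrix Z, additive measurement noise A, sparse coefficient vector β* lying in an ℓq-ball B_q(R_q)) is assumed here for illustration and may differ from the paper's exact notation.

% Minimal sketch of the high-dimensional errors-in-variables setup (assumed notation)
\begin{align}
  y &= X\beta^{*} + \varepsilon,
      && \text{response generated from the unobserved true covariates } X, \\
  Z &= X + A,
      && \text{only the additively corrupted covariate matrix } Z \text{ is observed}, \\
  \mathfrak{M}_{p} &:= \inf_{\hat{\beta}} \, \sup_{\beta^{*} \in \mathbb{B}_{q}(R_{q})}
      \mathbb{E}\,\bigl\lVert \hat{\beta} - \beta^{*} \bigr\rVert_{p},
      && \text{minimax } \ell_{p}\text{-risk } (1 \le p < \infty) \text{ over an } \ell_{q}\text{-ball}.
\end{align}

The object of study is the rate of this minimax risk; whether the loss is the ℓp-norm itself or its p-th power is not visible in the truncated abstract above.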

Cited by 4 publications (2 citation statements) · References 26 publications

“…Nghiem and Potgieter [75] introduced a new estimation method called simulation-selection-extrapolation (SIMSELEX), which used the Lasso in the simulation step and the group Lasso in the selection step. Li and Wu [76] established minimax convergence rates for the estimation of regression coefficients in a more general setting. Bai et al. [77] proposed a variable selection method for ultrahigh-dimensional linear quantile regression models with measurement errors.…”
Section: Introduction (mentioning; confidence: 99%)
“…Recently, researchers have begun to devote attention to errors-in-variables regression problems, and most of the existing results concern statistical inference for linear or generalized linear regression; see, e.g., [17][18][19][20] and the references therein. On the information-theoretic side, Loh and Wainwright [21] and Li and Wu [22] considered linear errors-in-variables regression and established minimax lower bounds for estimating a sparse vector by computing the corresponding KL divergence over certain sparse sets of vectors.…”
Section: Introduction (mentioning; confidence: 99%)
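
The KL-divergence argument mentioned in the statement above is the standard Fano-type reduction. As a reference point only, a generic form of that bound is sketched below; the packing set {β_1, …, β_M} and the 2δ-separation are assumptions of the generic argument, not details taken from [21] or [22].

% Generic Fano-type minimax lower bound (standard form; not the specific result of [21] or [22])
% Assumption: beta_1, ..., beta_M in Theta are pairwise 2*delta-separated in the l_p-norm.
\begin{equation}
  \inf_{\hat{\beta}} \; \sup_{\beta \in \Theta}
    \mathbb{P}\Bigl( \bigl\lVert \hat{\beta} - \beta \bigr\rVert_{p} \ge \delta \Bigr)
  \;\ge\;
  1 \;-\; \frac{\max_{j \neq k} D_{\mathrm{KL}}\bigl(P_{\beta_{j}} \,\Vert\, P_{\beta_{k}}\bigr) + \log 2}{\log M}.
\end{equation}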