2020
DOI: 10.48550/arxiv.2012.05432
Preprint
Low-rank matrix estimation in multi-response regression with measurement errors: Statistical and computational guarantees

Abstract: In this paper, we investigate the matrix estimation problem in multi-response regression with measurement errors. A nonconvex error-corrected estimator is proposed to estimate the matrix parameter via a combination of a loss function and nuclear norm regularization. Under the low-rank constraint, we then analyse the statistical and computational properties of the global solution of the nonconvex regularized estimator from a general standpoint. On the statistical side, we establish a recovery bound…
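As a rough illustration of the kind of estimator described above, here is a minimal numpy sketch of nuclear-norm-regularized, error-corrected least squares solved by proximal gradient descent, assuming the standard additive-measurement-error surrogates Γ̂ = ZᵀZ/n − Σ_w and Υ̂ = ZᵀY/n (an assumption; the paper's exact loss, step size, and side constraint may differ):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the prox of tau * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def corrected_lowrank(Z, Y, Sigma_w, lam, radius=None, iters=500):
    """Proximal gradient for the error-corrected program
        min_Theta  0.5 tr(Theta' Gamma Theta) - tr(Theta' Upsilon) + lam * ||Theta||_*
    with Gamma = Z'Z/n - Sigma_w and Upsilon = Z'Y/n.  Subtracting Sigma_w
    (the covariance of the additive covariate noise) makes Gamma an unbiased
    surrogate for X'X/n, but possibly indefinite, hence the nonconvexity."""
    n, d1 = Z.shape
    Gamma = Z.T @ Z / n - Sigma_w
    Upsilon = Z.T @ Y / n
    step = 1.0 / max(np.abs(np.linalg.eigvalsh(Gamma)).max(), 1e-12)
    Theta = np.zeros((d1, Y.shape[1]))
    for _ in range(iters):
        Theta = svt(Theta - step * (Gamma @ Theta - Upsilon), step * lam)
        if radius is not None:  # optional side constraint to keep iterates bounded
            nrm = np.linalg.norm(Theta, "fro")
            if nrm > radius:
                Theta *= radius / nrm
    return Theta

# Hypothetical usage: rank-2 signal, covariates observed with additive noise.
rng = np.random.default_rng(0)
n, d1, d2, r = 200, 30, 10, 2
X = rng.standard_normal((n, d1))
Theta_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
Y = X @ Theta_star + 0.1 * rng.standard_normal((n, d2))
Sigma_w = 0.04 * np.eye(d1)
Z = X + 0.2 * rng.standard_normal((n, d1))   # noisy covariates, Cov = Sigma_w
Theta_hat = corrected_lowrank(Z, Y, Sigma_w, lam=0.1)
```

The soft-thresholding step is the proximal operator of the nuclear norm; the optional Frobenius-ball projection stands in for the side constraint that nonconvex analyses of this type typically impose to keep iterates bounded.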

Cited by 1 publication (4 citation statements)
References 31 publications
“…Then we establish lower bounds on the minimax loss function, in terms of the squared Frobenius norm, for a certain class of low-rank matrices. This lower bound agrees with the upper bound given in our earlier work [25] up to constant factors, implying the optimality of the estimator proposed therein. Moreover, the minimax lower bound recovers the rate established under the clean-covariate assumption in the previous literature [12,14,15], which further indicates that, even in the more realistic errors-in-variables situation, no more samples are required to achieve rate-optimal estimation.…”
Section: Introduction | supporting | confidence: 91%
“…Theorem 1 tells us that, in the additive noise case, with high probability about max{d₁, d₂}R₀ samples are required for any method to consistently estimate a d₁ × d₂ matrix of rank R₀. Note that the lower bound (10) agrees with the upper bound obtained in our earlier work [25, Theorem 1] when λ = Ω…”
supporting | confidence: 89%
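For context, minimax lower bounds of this type usually take the following shape (a generic form under standard Gaussian-noise assumptions; the exact constants and noise scaling in bound (10) of the citing paper may differ):

$$\inf_{\widehat{\Theta}} \;\sup_{\operatorname{rank}(\Theta^*) \le R_0} \mathbb{E}\,\bigl\|\widehat{\Theta} - \Theta^*\bigr\|_F^2 \;\gtrsim\; \frac{\sigma^2 R_0 \max\{d_1, d_2\}}{n},$$

so driving the squared Frobenius error below a constant requires n on the order of max{d₁, d₂}R₀ samples, consistent with the statement quoted above.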