We show that two polynomial-time methods, a Lasso estimator with adaptively chosen tuning parameter and a Slope estimator, adaptively achieve the minimax prediction and ℓ2 estimation rate (s/n) log(p/s) in high-dimensional linear regression on the class of s-sparse vectors in R^p. This is done under the Restricted Eigenvalue (RE) condition for the Lasso and under a slightly more constraining assumption on the design for the Slope. The main results have the form of sharp oracle inequalities accounting for the model misspecification error. Minimax optimal bounds are also obtained for the ℓq estimation errors with 1 ≤ q ≤ 2 when the model is well-specified. The results are non-asymptotic and hold both in probability and in expectation. The assumptions that we impose on the design are satisfied with high probability for a large class of random matrices with independent and possibly anisotropically distributed rows. We give a comparative analysis of the conditions under which oracle bounds for the Lasso and Slope estimators can be obtained. In particular, we show that several known conditions, such as the RE condition and the sparse eigenvalue condition, are equivalent if the ℓ2-norms of the regressors are uniformly bounded.
MSC 2010 subject classifications: Primary 60K35, 62G08; secondary 62C20, 62G05, 62G20.
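A minimal sketch (not the paper's code) of the Lasso part in Python, with the tuning parameter of the theoretically optimal order σ√(log(p/s)/n). The constant in front of λ and the use of the true sparsity s are illustrative assumptions; the paper's adaptive choice does not require knowing s. Slope would replace the single λ with the decreasing weight sequence λ_j ∝ √(log(2p/j)/n), which scikit-learn does not implement.

```python
# Sketch: Lasso at the minimax-rate tuning level (illustrative constants).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, sigma = 200, 500, 10, 1.0           # sample size, dimension, sparsity, noise level
X = rng.standard_normal((n, p))              # isotropic Gaussian design (satisfies RE w.h.p.)
beta = np.zeros(p); beta[:s] = 1.0           # s-sparse target vector
y = X @ beta + sigma * rng.standard_normal(n)

# Tuning parameter of the order sigma * sqrt(log(p/s) / n); the constant is
# an assumption for illustration, not the paper's exact adaptive choice.
lam = sigma * np.sqrt(np.log(p / s) / n)
# scikit-learn's Lasso minimizes (1/(2n)) * ||y - Xb||_2^2 + alpha * ||b||_1
model = Lasso(alpha=lam, fit_intercept=False).fit(X, y)
print("l2 estimation error:", np.linalg.norm(model.coef_ - beta))
```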
For a convex class of functions F, a regularization function Ψ(·) and given the random data, we study estimation properties of regularization procedures of the form f̂ ∈ argmin_{f∈F} ( (1/N) Σ_{i=1}^N (Y_i − f(X_i))² + λΨ(f) ) for a well-chosen regularization parameter λ.
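As a hedged illustration (not the paper's algorithm), such a regularized least squares estimator can be computed by proximal gradient descent whenever Ψ admits a tractable proximal operator; the sketch below specializes Ψ to the ℓ1-norm, whose prox is soft-thresholding. All names and the step-size choice are illustrative.

```python
# Sketch: ISTA for f_hat = argmin (1/N) * ||y - X f||^2 + lam * Psi(f), Psi = l1.
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def regularized_ls(X, y, lam, n_iter=500):
    N = X.shape[0]
    L = 2.0 * np.linalg.norm(X, 2) ** 2 / N      # Lipschitz constant of the gradient
    f = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = -2.0 / N * (X.T @ (y - X @ f))    # gradient of the empirical squared loss
        f = soft_threshold(f - grad / L, lam / L)
    return f
```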
We prove that i.i.d. random vectors that satisfy a rather weak moment assumption can be used as measurement vectors in Compressed Sensing, and the number of measurements required for exact reconstruction is the same as the best possible estimate, exhibited by a random Gaussian matrix. We then show that this moment condition is necessary, up to a log log factor. In addition, we explore the Compatibility Condition and the Restricted Eigenvalue Condition in the noisy setup, as well as properties of neighbourly random polytopes.
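A hedged proof-of-concept sketch (not the paper's code): draw a measurement matrix with heavy-tailed i.i.d. entries possessing only a few finite moments, and recover an s-sparse vector exactly by ℓ1-minimization (basis pursuit) cast as a linear program. The sizes and the t-distribution's degrees of freedom are illustrative choices.

```python
# Sketch: exact sparse recovery from heavy-tailed measurements via basis pursuit,
# i.e. min ||x||_1 s.t. Ax = b, written as an LP over the split (x+, x-).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
p, s, m = 200, 5, 60                           # m of the order s * log(p/s)
A = rng.standard_t(df=8, size=(m, p))          # heavy-tailed rows: few finite moments
x = np.zeros(p); x[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
b = A @ x

c = np.ones(2 * p)                             # minimize 1'(x+ + x-) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * p))
x_hat = res.x[:p] - res.x[p:]
print("recovery error:", np.linalg.norm(x_hat - x))
```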
We obtain estimation error rates and sharp oracle inequalities for regularization procedures of the form f̂ ∈ argmin_{f∈F} ( (1/N) Σ_{i=1}^N ℓ(f(X_i), Y_i) + λ‖f‖ ) when ‖·‖ is any norm, F is a convex class of functions and ℓ is a Lipschitz loss function satisfying a Bernstein condition over F. We explore both the bounded and subgaussian stochastic frameworks for the distribution of the f(X_i)'s, with no assumption on the distribution of the Y_i's. The general results rely on two main objects: a complexity function and a sparsity equation, which depend on the specific setting at hand (loss ℓ and norm ‖·‖). As a proof of concept, we obtain minimax rates of convergence in the following problems: 1) matrix completion with any Lipschitz loss function, including the hinge and logistic loss for the so-called 1-bit matrix completion instance of the problem, and quantile losses for the general case, which makes it possible to estimate any quantile of the entries of the matrix; 2) logistic LASSO and variants such as the logistic SLOPE; 3) kernel methods, where the loss is the hinge loss and the regularization function is the RKHS norm.
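An illustrative sketch of item 2) only (logistic LASSO, i.e. the Lipschitz logistic loss with an ℓ1 penalty), using scikit-learn rather than the authors' implementation; the regularization level C is an arbitrary assumption, not the theoretically calibrated λ.

```python
# Sketch: logistic LASSO = l1-penalized logistic regression (illustrative setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n, p, s = 300, 100, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p); beta[:s] = 2.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(int)  # logistic model

# C is the inverse regularization strength; 1/C plays the role of lambda here.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0,
                         fit_intercept=False).fit(X, y)
print("nonzero coefficients selected:", np.count_nonzero(clf.coef_))
```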