Proposition 8.1. Assume M is infinitesimally linear. For any vector field X : M × D → M on M, we have, for any m ∈ M,

∀(d₁, d₂) ∈ D(2) : X(X(m, d₁), d₂) = X(m, d₁ + d₂)   (8.4)

Proof. Note that the right-hand side makes sense, since d₁ + d₂ ∈ D for (d₁, d₂) ∈ D(2). Both sides of the equation may be viewed as functions
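The claim that d₁ + d₂ ∈ D can be checked in one line, assuming the standard definitions D = {d : d² = 0} and D(2) = {(d₁, d₂) ∈ D × D : d₁d₂ = 0}:

```latex
(d_1 + d_2)^2 \;=\; d_1^2 + 2\,d_1 d_2 + d_2^2 \;=\; 0 + 0 + 0 \;=\; 0,
\qquad \text{so } d_1 + d_2 \in D .
```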
Abstract. This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order of magnitude than the sample size. We also give conditions under which no relevant variables are excluded. Next, non-asymptotic probabilities are given for the adaptive LASSO to select the correct sparsity pattern, and we give conditions under which the adaptive LASSO reveals the correct sparsity pattern asymptotically. We establish that the estimates of the non-zero coefficients are asymptotically equivalent to the oracle-assisted least squares estimator. This is used to show that the rate of convergence of the estimates of the non-zero coefficients is identical to that of least squares including only the relevant covariates.
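The two-stage adaptive LASSO the abstract refers to can be sketched as follows. This is an illustrative, numpy-only sketch under our own assumptions (a simulated sparse VAR(1), a plain coordinate-descent LASSO solver, and arbitrarily chosen penalty levels), not the paper's estimator or tuning scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sparse, stationary VAR(1): y_t = A y_{t-1} + eps_t
k, T = 5, 400
A = np.zeros((k, k))
A[0, 0], A[1, 0], A[2, 2] = 0.5, 0.3, -0.4  # sparse, spectral radius < 1
y = np.zeros((k, T))
for t in range(1, T):
    y[:, t] = A @ y[:, t - 1] + rng.normal(size=k)

# One equation of the VAR as a regression of y_{0,t} on all lagged series
X, target = y[:, :-1].T, y[0, 1:]          # X: (T-1, k)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            # Partial residual excluding coordinate j, then soft-threshold
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

# Stage 1: plain LASSO as the initial estimator
beta_init = lasso_cd(X, target, lam=20.0)

# Stage 2: adaptive LASSO = LASSO with weights 1/|initial coefficient|,
# implemented by rescaling columns of X and undoing the scaling afterwards
w = 1.0 / np.maximum(np.abs(beta_init), 1e-6)
beta_adaptive = lasso_cd(X / w, target, lam=20.0) / w
```

The rescaling in stage 2 works because minimizing ||y − Xβ||² + λ Σⱼ wⱼ|βⱼ| over β is equivalent to an unweighted LASSO in β̃ⱼ = wⱼβⱼ with design X/w; coefficients with a large initial estimate are penalized lightly, while coefficients the first stage set (near) zero receive a heavy penalty, which is what drives correct sparsity-pattern selection.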