2015
DOI: 10.2139/ssrn.2626507

L_1-Regularization of High-Dimensional Time-Series Models with Flexible Innovations

Abstract: We study the asymptotic properties of the Adaptive LASSO (adaLASSO) in sparse, high-dimensional, linear time-series models. We assume that both the number of covariates in the model and the number of candidate variables can increase with the sample size (polynomially or geometrically). In other words, we allow the number of candidate variables to be larger than the number of observations. We show that the adaLASSO consistently chooses the relevant variables as the number of observations increases (model selection consistency…

Cited by 7 publications (4 citation statements)
References 50 publications
“…Medeiros and Mendes (2016) showed that the conditions that must be satisfied on the adaLASSO are very general. The model works even when the number of variables increases faster than the number of observations and when the errors are non-Gaussian and heteroskedastic.…”
Section: Lasso and Adaptive-Lasso (mentioning)
confidence: 99%
“…In general, τ is set to unity and b_j is estimated in the first step using the LASSO. According to Medeiros and Mendes (2016), the conditions required by the adaLASSO estimator are very general, and the model works even when the errors are non-Gaussian and heteroskedastic and the number of variables increases faster than the number of observations. Model 10 (Ridge regression): It is well known that OLS often does poorly in prediction on future data (e.g., due to overfitting).…”
Section: Model 8 (Lasso) (mentioning)
confidence: 99%
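
The excerpt above describes the standard two-step adaLASSO recipe: a first-step LASSO yields pilot estimates b_j, which define penalty weights |b_j|^(-τ) (with τ usually set to unity) for a second, weighted LASSO. Below is a minimal sketch of that recipe; the use of scikit-learn's LassoCV, the simulated i.i.d. Gaussian design, and τ = 1 are illustrative assumptions, not the setup of the cited papers (which, in particular, select tuning parameters by BIC and allow dependent, heteroskedastic errors).

```python
# Minimal two-step adaptive LASSO sketch (illustrative assumptions:
# cross-validated penalties, i.i.d. Gaussian design, tau = 1).
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 200, 500                            # more candidate variables than observations
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [2.0, -1.5, 1.0, 0.5, -0.5]     # sparse truth
y = X @ beta + rng.standard_normal(n)

# Step 1: plain LASSO for the pilot estimates b_j.
b1 = LassoCV(cv=5).fit(X, y).coef_

# Step 2: weighted LASSO with w_j = |b_j|^(-tau), implemented by
# rescaling columns x_j -> x_j * |b_j|^tau and running a plain LASSO.
tau = 1.0
scale = np.abs(b1) ** tau
keep = scale > 0                           # variables killed in step 1 stay excluded
fit2 = LassoCV(cv=5).fit(X[:, keep] * scale[keep], y)

beta_hat = np.zeros(p)
beta_hat[keep] = fit2.coef_ * scale[keep]  # undo the column rescaling
print("selected variables:", np.flatnonzero(beta_hat))
```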
“…framework may be complicated. Moreover, Medeiros and Mendes (2015) showed that selecting the parameters λ and τ in a time-series environment by using the BIC, with the LASSO as the first step for the adaLASSO, yields estimates that have the oracle property even in adverse situations with heteroskedasticity and t-distributed errors. In addition, the authors allow the number of candidate variables to increase with the number of observations and show that, under these conditions, the adaLASSO has model selection consistency, i.e., it asymptotically chooses the most parsimonious model.…”
Section: Lasso Models (mentioning)
confidence: 99%
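
The excerpt above attributes the choice of λ (and τ) to the BIC rather than cross-validation. A minimal sketch of BIC-based penalty selection for a LASSO step follows; the penalty grid, the degrees-of-freedom proxy (the number of nonzero coefficients), and the helper name lasso_bic are assumptions made for illustration, not the authors' implementation.

```python
# BIC-based selection of the LASSO penalty (sketch; the grid and the
# df proxy are illustrative assumptions).
import numpy as np
from sklearn.linear_model import Lasso

def lasso_bic(X, y, lambdas):
    """Fit a LASSO for each penalty and return the (penalty, coefficients)
    pair minimizing BIC = n*log(RSS/n) + df*log(n), df = #nonzero coefs."""
    n = len(y)
    best_bic, best = np.inf, None
    for lam in lambdas:
        fit = Lasso(alpha=lam).fit(X, y)
        resid = y - fit.predict(X)
        df = np.count_nonzero(fit.coef_)
        bic = n * np.log(resid @ resid / n) + df * np.log(n)
        if bic < best_bic:
            best_bic, best = bic, (lam, fit.coef_.copy())
    return best

# Example usage over a log-spaced grid:
# lam, coef = lasso_bic(X, y, np.logspace(-3, 1, 30))
```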