2021
DOI: 10.48550/arxiv.2107.08498
Preprint

Decoupling Shrinkage and Selection for the Bayesian Quantile Regression

Abstract: This paper extends the idea of decoupling shrinkage and sparsity for continuous priors to Bayesian Quantile Regression (BQR). The procedure follows two steps: In the first step, we shrink the quantile regression posterior through state-of-the-art continuous priors, and in the second step, we sparsify the posterior through an efficient variant of the adaptive lasso, the signal adaptive variable selection (SAVS) algorithm. We propose a new variant of the SAVS which automates the choice of penalisation through qua…
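The sparsification step described in the abstract builds on the signal adaptive variable selection (SAVS) algorithm of Ray and Bhattacharya (2018). As a point of reference, here is a minimal NumPy sketch of the *standard* SAVS rule — not the paper's automated-penalty variant, whose details are truncated above — applied to one posterior coefficient draw; the function name and setup are illustrative:

```python
import numpy as np

def savs(beta, X):
    """Standard SAVS rule (Ray & Bhattacharya, 2018), sketched for illustration.

    Soft-thresholds each coefficient beta_j with the coefficient-adaptive
    penalty mu_j = 1 / beta_j**2, so large signals are barely shrunk while
    small coefficients are set exactly to zero.
    """
    beta = np.asarray(beta, dtype=float)
    col_norm2 = np.sum(X ** 2, axis=0)            # ||X_j||^2 for each predictor
    mu = 1.0 / beta ** 2                          # adaptive penalty per coefficient
    mag = np.maximum(np.abs(beta) * col_norm2 - mu, 0.0)
    return np.sign(beta) * mag / col_norm2        # sparsified coefficient vector
```

Applying `savs` draw-by-draw to the shrunk BQR posterior yields a sparse posterior summary, which is the "decoupling" the paper refers to: estimation (shrinkage) and selection (sparsification) happen in separate steps.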

Cited by 2 publications (5 citation statements)
References 55 publications (110 reference statements)
“…For the large datasets we consider in this paper, M + K ≫ T and thus suitable shrinkage priors are necessary to obtain precise inference. Kohns and Szendrei (2021) and Mitchell et al (forthcoming) use flexible shrinkage priors in large-scale QRs and show that these work well for tail forecasting.…”
Section: Priors For the Quantile Regression Coefficients
confidence: 99%
“…A recent exception is Kohns and Szendrei (2021), who estimate large-scale quantile regressions and then apply ex-post sparsification to sharpen predictive inference.…”
confidence: 99%
“…Conditional on the quantile, this model resembles a generalized additive model (GAM); see Hastie and Tibshirani (1987). This specification differs from much of the literature (e.g., Adrian et al, 2019; Carriero et al, 2022; Kohns & Szendrei, 2021; Mitchell et al, 2022), which sets g_𝔮(x_t) = 0 for all t. We approximate g_𝔮(x_t) using nonlinear transformations of x_t:…”
Section: The Likelihood Function
confidence: 99%
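The quantile regressions discussed in these citation statements all minimize the check (pinball) loss. A small NumPy sketch, for orientation only, shows the loss and verifies that for an intercept-only fit its minimizer is the empirical quantile — the property that makes the loss the right objective for conditional quantiles:

```python
import numpy as np

def pinball(u, q):
    # check (pinball) loss of quantile regression at quantile level q
    return np.mean(np.maximum(q * u, (q - 1) * u))

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)
q = 0.9

# intercept-only quantile regression: scan constant fits c and
# compare the pinball-loss minimizer against the empirical q-quantile
grid = np.linspace(-3, 3, 601)
losses = [pinball(y - c, q) for c in grid]
best = grid[int(np.argmin(losses))]
qhat = np.quantile(y, q)
```

With covariates (or the nonlinear transformations g_𝔮(x_t) mentioned in the quote above), the same loss is minimized over regression coefficients instead of a single constant.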
“…For the large datasets we consider in this paper, M + K ≫ T and thus suitable shrinkage priors are necessary to obtain precise inference. Kohns and Szendrei (2021) and Mitchell et al (2022) use flexible shrinkage priors in large-scale QRs and show that these work well for tail forecasting. We build on their findings by considering a range of different priors on β_𝔮 and γ_𝔮.…”
Section: Priors For the Quantile Regression Coefficients
confidence: 99%