2020
DOI: 10.48550/arxiv.2008.10230
Preprint

Unified Bayesian theory of sparse linear regression with nuisance parameters

Abstract: We study frequentist asymptotic properties of Bayesian procedures for high-dimensional Gaussian sparse regression when unknown nuisance parameters are involved. Nuisance parameters can be finite-, high-, or infinite-dimensional. A mixture of point masses at zero and continuous distributions is used for the prior distribution on sparse regression coefficients, and appropriate prior distributions are used for nuisance parameters. The optimal posterior contraction of sparse regression coefficients, hampered by th…
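The coefficient prior described in the abstract is a spike-and-slab mixture. As a minimal sketch, assuming the hard-spike-and-Laplace-slab form referenced in the citation statements below (the Laplace slab and the scale λ are illustrative choices, not a verbatim statement of the paper's prior), each coefficient β_j is drawn as

\[
\beta_j \mid z_j \;\sim\; (1 - z_j)\,\delta_0 + z_j\,\mathrm{Lap}(\lambda),
\qquad
\mathrm{Lap}(\lambda)(db) = \frac{\lambda}{2}\, e^{-\lambda |b|}\, db,
\]

where δ_0 is the point mass at zero and the binary inclusion indicators z_1, …, z_p are governed by a prior on the model size, so that most coefficients are exactly zero a priori.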

Cited by 4 publications (5 citation statements)
References 30 publications
“…Expanding log(e_{k,n}) in the powers of (1 − e_{k,n}) and using Lemma 1 of Jeong and Ghosal (2020), (e…”
Section: Discussion (mentioning)
confidence: 99%
“…F }, which they derived by applying the general theory of posterior contraction in terms of the Rényi divergence as in Ning et al [92]. Further, extending the distributional approximation technique, Jeong and Ghosal [60] also showed selection consistency. Their setup accommodates a variety of practical extensions of the basic linear model including multidimensional response, partially sparse regression, multiple responses with missing observations, (multivariate) measurement errors, parametric correlation structure, mixed-effect models, graphical structure in precision matrix (see Subsection 5.2), nonparametric heteroscedastic regression and partial linear models.…”
Section: Linear Regression With Hard-Spike-and-Laplace Slab (mentioning)
confidence: 96%
“…) independently with possibly varying dimension m_i and covariance matrices ∆_{η,i}, allowing a nuisance parameter η and an additional term ξ_{η,i} to incorporate various departures from the simple linear model Y_i = X_i β + ε_i, was considered in Jeong and Ghosal [60]. They showed optimal recovery for the regression coefficient β under standard conditions on compatibility numbers.…”
Section: Linear Regression With Hard-Spike-and-Laplace Slab (mentioning)
confidence: 99%
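The model in the quote above can be written out explicitly. A reconstruction from the quoted description (the Gaussian form follows the abstract; exactly how the nuisance parameter η enters ξ_{η,i} and ∆_{η,i} differs across the listed examples):

\[
Y_i \mid \beta, \eta \;\overset{\text{ind}}{\sim}\; \mathrm{N}_{m_i}\!\left( X_i \beta + \xi_{\eta,i},\; \Delta_{\eta,i} \right), \qquad i = 1, \dots, n,
\]

which reduces to the simple linear model Y_i = X_i β + ε_i when ξ_{η,i} = 0 and ∆_{η,i} = σ² I_{m_i}.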
“…The spike and slab prior is one of the most popular priors for Bayesian high-dimensional analysis and has been extensively studied; some important works include Johnstone and Silverman (2004); Ročková and George (2018); and Castillo and Szabó (2020). The subset selection prior, another popular prior, including the spike and slab prior as a special case, has also been studied by Castillo and van der Vaart (2012); Castillo et al (2015); Martin et al (2017); Jeong and Ghosal (2020); and Ning et al (2020). See the review paper on this topic by Banerjee et al (2021).…”
Section: Introduction (mentioning)
confidence: 99%
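For context, the subset selection prior mentioned in the quote above can be sketched as a three-stage draw (the notation is illustrative, in the style of Castillo and van der Vaart (2012), not copied from any one of the cited papers): first a model size, then a subset, then the nonzero coefficients:

\[
s \sim \pi_p(s), \qquad S \mid s \sim \mathrm{Unif}\bigl(\{S \subseteq \{1,\dots,p\} : |S| = s\}\bigr), \qquad \beta_S \sim g_S, \quad \beta_{S^c} = 0.
\]

The spike-and-slab prior arises as the special case in which π_p is Binomial(p, w) and g_S is a product of slab densities, so that each coordinate is independently zero with probability 1 − w.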