2021
DOI: 10.1214/21-ejs1800

Convergence analysis of a collapsed Gibbs sampler for Bayesian vector autoregressions

Abstract: We study the convergence properties of a collapsed Gibbs sampler for Bayesian vector autoregressions with predictors, or exogenous variables. The Markov chain generated by our algorithm is shown to be geometrically ergodic regardless of whether the number of observations in the underlying vector autoregression is small or large in comparison to the order and dimension of it. In a convergence complexity analysis, we also give conditions for when the geometric ergodicity is asymptotically stable as the number of…
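To make the model class concrete, a vector autoregression with predictors couples each observation to its own lag and to exogenous variables. The following is a minimal simulation sketch of a VAR(1) with exogenous predictors; the dimensions, the stable coefficient matrix, and the noise scale are illustrative assumptions, not taken from the paper, and this does not implement the paper's collapsed Gibbs sampler.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k, m = 200, 3, 2          # observations, VAR dimension, number of predictors
A = 0.5 * np.eye(k)          # autoregressive coefficient matrix (chosen stable)
B = rng.normal(size=(k, m))  # coefficients on the exogenous predictors
X = rng.normal(size=(T, m))  # exogenous predictors
Y = np.zeros((T, k))
for t in range(1, T):
    # VAR(1) with predictors: y_t = A y_{t-1} + B x_t + eps_t
    Y[t] = A @ Y[t - 1] + B @ X[t] + rng.normal(scale=0.1, size=k)
```

Higher orders add further lag matrices A_1, …, A_p on the right-hand side; the paper's analysis covers that general case.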

Cited by 9 publications (8 citation statements); references 45 publications.
“…The approximate sampling distribution of the Monte Carlo error is often available through a version of the central limit theorem (CLT), which holds under moment conditions on the functionals and Markov chain mixing conditions (Jones, ), both of which require theoretical study to verify in a given application. Jones and Hobert () give an accessible introduction to this theory which has been applied in a number of practically relevant settings (Ekvall & Jones, ; Hobert, Jones, Presnell, & Rosenthal, ; Johnson & Jones, ; Khare & Hobert, ; Lund & Tweedie, ; Roberts & Tweedie, ; Roy & Hobert, , ; Tan & Hobert, ; Vats, ).…”
Section: Estimation and Sampling Distributions
confidence: 99%
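The quoted passage concerns using a Markov chain CLT to assess Monte Carlo error. A standard way to estimate the CLT variance from a single chain is the batch-means method; the sketch below is a generic illustration (batch count and the iid stand-in chain are assumptions for the demo, not tied to the cited works).

```python
import numpy as np

def batch_means_se(chain, n_batches=20):
    """Batch-means estimate of the Monte Carlo standard error of the chain mean."""
    n = len(chain) // n_batches * n_batches   # truncate to a multiple of the batch count
    batches = np.asarray(chain[:n]).reshape(n_batches, -1).mean(axis=1)
    # Sample variance of the batch means, scaled down to the full-chain mean
    return np.sqrt(batches.var(ddof=1) / n_batches)

rng = np.random.default_rng(1)
draws = rng.normal(size=10_000)  # stand-in for MCMC output of a functional
se = batch_means_se(draws)       # for iid N(0,1) draws, close to 1/sqrt(10_000)
```

The CLT (and hence the validity of such error estimates) requires conditions like the geometric ergodicity established in the paper, together with moment conditions on the functional.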
“…At the same time, there has been significant recent interest in the convergence properties of Monte Carlo Markov chains in high-dimensional settings [8,11,15,20,37,41,53] and traditional approaches can have limitations in this regime [38]. This has led to an interest in considering the convergence of Monte Carlo Markov chains using Wasserstein distances [13,15,19,28,39,40] which may scale to large problem sizes where other approaches have had difficulties [7,15,40].…”
Section: Introduction
confidence: 99%
“…For example, the convergence rate of a random walk algorithm for logistic regression in one dimension has been studied in terms of the sample size [20]. Other related results concern the convergence properties of some high-dimensional Gibbs samplers [33,34] or the convergence properties of Gibbs samplers when the dimension or the sample size increase individually [11,40,41].…”
Section: Introduction
confidence: 99%
“…We will be interested in the posterior for both models (1) and (2) using independent normal and inverse-gamma priors on the parameters (A, α, β, σ²). Independent priors are a popular choice in Bayesian regression models with and without measurement error [6,12,14,39]. For the EIV regression models (1) and (2), the independent priors are chosen…”
confidence: 99%
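The independent normal and inverse-gamma prior combination mentioned in the quote leads to tractable full conditionals in ordinary Bayesian linear regression, which is what makes Gibbs sampling natural in this family of models. The sketch below is a generic two-block Gibbs sampler for that simpler setting, with assumed hyperparameters; it is not the EIV samplers of the citing paper nor the collapsed sampler analyzed here.

```python
import numpy as np

def gibbs_lm(y, X, n_iter=500, tau2=10.0, a0=2.0, b0=2.0, seed=0):
    """Two-block Gibbs sampler for y ~ N(X beta, sigma2 I) with independent
    priors beta ~ N(0, tau2 I) and sigma2 ~ Inverse-Gamma(a0, b0)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta, sigma2 = np.zeros(p), 1.0
    draws = []
    for _ in range(n_iter):
        # beta | sigma2, y : conjugate normal full conditional
        prec = X.T @ X / sigma2 + np.eye(p) / tau2
        cov = np.linalg.inv(prec)
        mean = cov @ (X.T @ y) / sigma2
        beta = rng.multivariate_normal(mean, cov)
        # sigma2 | beta, y : conjugate inverse-gamma full conditional
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
        draws.append((beta.copy(), sigma2))
    return draws
```

Collapsing, as in the paper under discussion, would integrate one block out analytically before sampling, which typically improves mixing over this plain two-block scheme.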