2021
DOI: 10.3390/e23030313

PAC-Bayes Bounds on Variational Tempered Posteriors for Markov Models

Abstract: Datasets displaying temporal dependencies abound in science and engineering applications, with Markov models representing a simplified and popular view of the temporal dependence structure. In this paper, we consider Bayesian settings that place prior distributions over the parameters of the transition kernel of a Markov model, and seek to characterize the resulting, typically intractable, posterior distributions. We present a Probably Approximately Correct (PAC)-Bayesian analysis of variational Bayes (VB) app…
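As a rough orientation (generic notation, not taken from the paper itself): an analysis of variational tempered posteriors for a Markov model typically works with a tempered (fractional) posterior over the transition-kernel parameter θ and its variational approximation within a family Q. For observations X_1, …, X_n, prior π, and temperature α ∈ (0, 1), a standard formulation is

\[
  \pi_{n,\alpha}(d\theta) \;\propto\; \exp\!\Big( \alpha \sum_{i=2}^{n} \log p_\theta(X_i \mid X_{i-1}) \Big)\, \pi(d\theta),
  \qquad
  \hat q \;=\; \operatorname*{arg\,min}_{q \in \mathcal{Q}} \, \mathrm{KL}\big(q \,\big\|\, \pi_{n,\alpha}\big).
\]

PAC-Bayes-type results in this setting typically bound the risk of \(\hat q\) by an empirical-fit term plus a \(\mathrm{KL}(q \,\|\, \pi)/(\alpha n)\) complexity penalty, uniformly over the variational family \(\mathcal{Q}\).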

Cited by 2 publications (4 citation statements)
References 22 publications

“…More exponential moment inequalities (and moment inequalities) for dependent variables can be found in the paper [181] and in the book dedicated to weak dependence [61]. Other time series models where PAC-Bayes bounds were used include martingales [161], Markov chains [20], continuous dynamical systems [85], LTI systems [67]...…”
Section: Inequalities For Dependent Variables
confidence: 99%
“…More theoretical studies on variational inference (using PAC-Bayes, or not) appeared at the same time or since: [50,93,122,48,145,51,20,19,138,69].…”
Section: Variational Approximations
confidence: 99%
“…In contrast, in this paper we deal with discrete-time systems with inputs and the learning takes place from a single time-series. In [29] learning of general Markov-chains was considered, but the state of the Markov-chain was assumed to be observable and no inputs were considered. The learning problem of [29] is thus different from the one considered in this paper.…”
Section: Introduction
confidence: 99%
“…In [29] learning of general Markov-chains was considered, but the state of the Markov-chain was assumed to be observable and no inputs were considered. The learning problem of [29] is thus different from the one considered in this paper.…”
Section: Introduction
confidence: 99%