2020
DOI: 10.1214/20-ecp333

Exponential filter stability via Dobrushin’s coefficient

Cited by 19 publications (9 citation statements). References 14 publications.
“…For a comprehensive review on filter stability in the control-free case, see [24]. In the controlled case, recent studies include [77, 78].…”
Section: Methods I: Belief-Space Quantization Based on Finite-Memory
confidence: 99%
“…Examples of the latter type include algorithms employing various pruning heuristics [Mon82, CLZ97, HF00a, Hau00, RG02, TK03, PB04, SV05, PGT06, RPPCD08, SV10, SS12, SYHL13, GHL19] and algorithms which optimize over restricted classes of policies [Han98, MKKC99, KMN99, LYX11, AYA18]. To our knowledge, some of the only works presenting subexponential-time approximate planning algorithms are [BDRS96] and [MY20, KY20] (ignoring end-to-end learning algorithms, which we discuss later).…”
Section: Related Work
confidence: 99%
“…Moreover, this property plays a key role in applications, in terms of bounding the effect of misspecification and analyzing the asymptotic [MS92, SM94] and non-asymptotic [CKMY21] robustness. For finite, unstructured HMMs (our setting), there are asymptotic stability results under analogues of observability [VH09a], and exponential stability results under mixing assumptions [SAD98, BK98, MY20]. In this context, Theorem 1.3 is the first exponential stability result for unstructured HMMs without mixing assumptions.…”
Section: Introduction
confidence: 99%
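
For context, the "mixing assumptions" invoked in the excerpts above are typically phrased through Dobrushin's ergodicity coefficient, the quantity named in the paper's title. A minimal sketch of the standard definition and the contraction it yields (the notation δ(P) below is ours, not quoted from any of the citing papers): for a row-stochastic kernel P,

\[
\delta(P) = \min_{i,j} \sum_{k} \min\bigl(P(i,k),\, P(j,k)\bigr),
\qquad
\|\mu P - \nu P\|_{TV} \le \bigl(1 - \delta(P)\bigr)\,\|\mu - \nu\|_{TV},
\]

so when δ(P) > 0, iterating the kernel forgets the difference between two priors μ and ν at the geometric rate (1 − δ(P))^n.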
“…U_n which are obtained under π^*. Proposition 1 implies that, if the underlying Markov chain is ergodic in the sense of Condition 2, different priors are forgotten at a geometric rate in n, similar to the case of HMCs [39, 38, 40]. In the case of finite-state POMDPs, for tabular Q-learning, a different characterization of the inference error was presented in [25] under an assumption on the Dobrushin coefficient.…”
Section: Condition 2 (Minorization-Majorization)
confidence: 99%
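
The geometric forgetting described in this last excerpt is easy to check numerically. Below is a minimal, self-contained Python sketch (the 3-state kernel P is hypothetical and chosen purely for illustration; none of this code is taken from the cited papers) that computes the Dobrushin coefficient of a transition matrix and verifies that the total-variation distance between two propagated priors decays at least as fast as (1 − δ(P))^n:

import numpy as np

def dobrushin_coefficient(P):
    # Minimal row overlap of a row-stochastic matrix P:
    # delta(P) = min_{i,j} sum_k min(P[i,k], P[j,k]).
    n = P.shape[0]
    return min(np.minimum(P[i], P[j]).sum() for i in range(n) for j in range(n))

# Hypothetical 3-state transition kernel (illustrative only).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])

delta = dobrushin_coefficient(P)   # = 0.7 here, so the chain is mixing
mu = np.array([1.0, 0.0, 0.0])     # prior concentrated on state 0
nu = np.array([0.0, 0.0, 1.0])     # prior concentrated on state 2

for n in range(1, 6):
    mu, nu = mu @ P, nu @ P                      # propagate both priors one step
    tv = 0.5 * np.abs(mu - nu).sum()             # total-variation distance
    print(n, round(tv, 6), round((1 - delta) ** n, 6))  # TV vs. geometric bound

On this example the printed TV distances stay at or below the bound (1 − δ(P))^n = 0.3^n at every step, which is exactly the geometric prior-forgetting the excerpt describes. Note the paper itself concerns the nonlinear filter rather than the bare Markov chain, so this illustrates the contraction mechanism, not the paper's theorem.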