2021
DOI: 10.48550/arxiv.2111.13626
Preprint

Hidden Markov Modeling over Graphs

Abstract: This work proposes a multi-agent filtering algorithm over graphs for finite-state hidden Markov models (HMMs), which can be used for sequential state estimation or for tracking opinion formation over dynamic social networks. We show that the difference from the optimal centralized Bayesian solution is asymptotically bounded for geometrically ergodic transition models. Experiments illustrate the theoretical findings and in particular, demonstrate the superior performance of the proposed algorithm compared to a …
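The abstract describes distributed filtering for finite-state HMMs over a graph. As a rough illustration of that setting (not the paper's exact algorithm), the sketch below implements a generic diffusion-style filter: each agent predicts with the shared Markov transition model, performs a local Bayesian measurement update, then combines neighbors' beliefs geometrically via a combination matrix `A`. All variable names and the specific combination rule are assumptions made for illustration.

```python
import numpy as np

def diffusion_hmm_filter(T, likelihoods, A):
    """Illustrative diffusion-style HMM filter over a graph.

    T           : (S, S) transition matrix, T[i, j] = P(x_t = j | x_{t-1} = i)
    likelihoods : (n_steps, N, S) per-agent observation likelihoods p(y_t^k | x_t = s)
    A           : (N, N) combination (weight) matrix over the network
    Returns the (N, S) belief matrix after the final step.
    """
    n_steps, N, S = likelihoods.shape
    beliefs = np.full((N, S), 1.0 / S)  # uniform initial beliefs
    for t in range(n_steps):
        # 1) time update: propagate each agent's belief through the Markov model
        predicted = beliefs @ T
        # 2) local Bayesian measurement update with the private observation
        local = predicted * likelihoods[t]
        local /= local.sum(axis=1, keepdims=True)
        # 3) combine neighbors' beliefs geometrically (log-linear pooling)
        log_b = A @ np.log(local + 1e-300)
        beliefs = np.exp(log_b - log_b.max(axis=1, keepdims=True))
        beliefs /= beliefs.sum(axis=1, keepdims=True)
    return beliefs
```

Each returned row is a probability vector over the S hidden states, one per agent.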

Cited by 2 publications (3 citation statements)
References 22 publications
“…Distributed inference can be broadly classified into three approaches based on the characteristics of the unknown variable: estimation for continuous variables [2], [3], [13]- [16], detection (hypothesis testing) for categorical variables [10], [17]- [33], and filtering for dynamic variables [34]- [39]. AA and GA are commonly used and the distinction between them is present in all approaches.…”
Section: A. Distributed Inference
confidence: 99%
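The statement above distinguishes AA (arithmetic averaging) and GA (geometric averaging) of belief vectors, the two standard fusion rules in distributed inference. A minimal sketch of both, assuming row-stochastic combination weights and strictly positive beliefs:

```python
import numpy as np

def arithmetic_average(beliefs, weights):
    """AA: convex combination of neighbor belief vectors (stays normalized)."""
    return weights @ beliefs

def geometric_average(beliefs, weights):
    """GA: weighted geometric mean of beliefs, renormalized per agent."""
    log_b = weights @ np.log(beliefs)
    g = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    return g / g.sum(axis=1, keepdims=True)
```

With equal weights over two agents holding beliefs (0.9, 0.1) and (0.5, 0.5), AA yields (0.7, 0.3) while GA yields (0.75, 0.25), illustrating that the two pooling rules generally disagree.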
“…Subsequently, the agents combine these approximate intermediate beliefs to update their beliefs as in [8], [9], [12], [13], [16]:…”
Section: A Social Learning Strategy
confidence: 99%
“…Non-Bayesian social learning algorithms [6]- [13] are implemented in two steps: i) agents form their local beliefs, based on private observations; ii) agents combine their neighbors' beliefs using a weighted averaging scheme like consensus [14] or diffusion [15]. An implicit assumption common to these models is that agents are willing to share with neighbors their full belief vector.…”
Section: Introduction and Related Work
confidence: 99%
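The two-step structure described in the statement above (local Bayesian update on private data, then weighted combination of neighbors' beliefs) can be sketched as a single round, here with a log-linear combination as one possible averaging scheme; the function name and interface are illustrative assumptions:

```python
import numpy as np

def social_learning_step(beliefs, likelihoods, A):
    """One illustrative round of non-Bayesian social learning.

    beliefs     : (N, S) current belief of each of N agents over S hypotheses
    likelihoods : (N, S) likelihood of each agent's private observation
    A           : (N, S)-compatible (N, N) combination matrix over the graph
    """
    # i) local Bayesian update with the private observation
    local = beliefs * likelihoods
    local /= local.sum(axis=1, keepdims=True)
    # ii) combine neighbors' full belief vectors (log-linear averaging here;
    #     consensus/diffusion variants differ in how A is applied)
    log_b = A @ np.log(local + 1e-300)
    new = np.exp(log_b - log_b.max(axis=1, keepdims=True))
    return new / new.sum(axis=1, keepdims=True)
```

Note that step ii) requires each agent to transmit its entire belief vector, which is exactly the implicit assumption the quoted passage highlights.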