2016
DOI: 10.1109/tsp.2016.2543209

Distributed Inference Over Directed Networks: Performance Limits and Optimal Design

Abstract: We find large deviations rates for consensus-based distributed inference for directed networks. When the topology is deterministic, we establish the large deviations principle and find exactly the corresponding rate function, equal at all nodes. We show that the dependence of the rate function on the stochastic weight matrix associated with the network is fully captured by its left eigenvector corresponding to the unit eigenvalue. Further, when the sensors' observations are Gaussian, the rate function admits a…
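To make the abstract's key quantity concrete, here is a minimal numerical sketch (not from the paper) of computing the left eigenvector of a row-stochastic weight matrix associated with the unit eigenvalue, i.e., the vector that the abstract says fully captures the rate function's dependence on the weight matrix. The example matrix and the normalization convention are illustrative assumptions.

```python
# Minimal sketch (not from the paper): left eigenvector of a row-stochastic
# weight matrix W for the unit eigenvalue. The matrix values are illustrative.
import numpy as np

# Example row-stochastic weight matrix for a 3-node directed network
W = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.1, 0.8],
])
assert np.allclose(W.sum(axis=1), 1.0), "W must be row-stochastic"

# Left eigenvectors of W are right eigenvectors of W.T
eigvals, eigvecs = np.linalg.eig(W.T)
idx = np.argmin(np.abs(eigvals - 1.0))   # eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                       # normalize so entries sum to 1

print("left eigenvector for the unit eigenvalue:", pi)
```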

Cited by 9 publications (5 citation statements). References 37 publications.
“…This is because, in the adaptive social learning problem, the agents want to learn an optimal decision from the received data, rather than a specific parameter. From (24), the performance of learning is determined by the distribution of x_{ave,i}(θ), which captures the information of local observations across the network. When the signal model is accurate, the log-likelihood ratio x_{k,i}(θ) provides the full information of each piece of independent observation ξ_{k,i} for the decision-making task.…”
Section: Interesting Cases (mentioning)
confidence: 99%
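As an illustration of the statistic discussed in the quoted passage, the sketch below (not from the cited paper) forms per-node Gaussian log-likelihood ratios and their network average; the symbols follow the quote, while the means, noise level, and number of nodes are assumed values.

```python
# Illustrative sketch (assumed Gaussian model, not from the cited paper):
# per-node log-likelihood ratios x_{k,i} and their network average x_{ave,i}.
import numpy as np

rng = np.random.default_rng(0)
K, mu0, mu1, sigma = 5, 0.0, 1.0, 1.0    # assumed nodes, hypotheses, noise level

# One observation xi_{k,i} per node k at time i (data generated under mu1)
xi = rng.normal(mu1, sigma, size=K)

# Gaussian log-likelihood ratio log p1(xi)/p0(xi) at each node
x_k = ((xi - mu0) ** 2 - (xi - mu1) ** 2) / (2 * sigma ** 2)

# Network-average statistic whose distribution governs the learning performance
x_ave = x_k.mean()
print("per-node LLRs:", x_k, "average:", x_ave)
```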
“…It was shown in [21] that the diffusion-based detection strategy achieves a better adaptation ability than the consensus-based counterpart. The learning performance of both strategies, i.e., the learning speed and the steady-state error probability, was shown to depend on the combination policy employed by the network [19]–[22], [24].…”
Section: Introduction (mentioning)
confidence: 99%
“…For future reference, we also introduce the mean Laplacian matrix L̄(t) as L̄(t) = E[L(t)], and the centered matrix L̃(t) = L(t) − L̄(t). Thus, it holds that E[L̃(t)] = 0, and E[L̃(t)²]…”
Section: CREDO-NL: A Communication Efficient Distributed WNLS Esti… (mentioning)
confidence: 99%
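The decomposition in the quoted passage can be checked numerically. The sketch below (an assumption-laden illustration, not the cited construction) builds random Laplacians under i.i.d. Bernoulli link activations, forms the mean Laplacian and the centered part, and verifies that the sample mean of the centered part is close to zero.

```python
# Illustrative sketch (assumed random-link model, not from the cited paper):
# random Laplacian L(t), its mean Lbar = E[L(t)], and Ltilde(t) = L(t) - Lbar.
import numpy as np

rng = np.random.default_rng(1)
N, p = 4, 0.7                        # assumed node count and link activation probability
base = np.triu(np.ones((N, N)), 1)   # underlying edge set (complete graph, upper triangle)

def laplacian(adj):
    adj = adj + adj.T                            # symmetrize
    return np.diag(adj.sum(axis=1)) - adj        # degree matrix minus adjacency

Lbar = laplacian(p * base)            # mean Laplacian under Bernoulli(p) links

samples = [laplacian(base * (rng.random((N, N)) < p)) for _ in range(20000)]
Ltilde_mean = np.mean([L - Lbar for L in samples], axis=0)
print(np.max(np.abs(Ltilde_mean)))    # close to 0, consistent with E[Ltilde(t)] = 0
```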
“…1. From now on, in order to better distinguish the MSE rate of decay with respect to the number of iterations t and with respect to the number of per-node communications, we will refer to the former as the MSE iteration-wise rate and to the latter as the MSE communication rate. 2. The stronger requirement imposed here, with 1 being strictly positive, is only required for the benchmark estimator in Eqs. (13)–(14) ahead to be defined properly; the reason for this requirement is the two time scale nature [43] of the benchmark estimator (13)–(14).…”
Section: Endnotes (mentioning)
confidence: 99%
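As a small illustration of the terminology in the quoted endnote (not taken from the cited paper), the sketch below contrasts an MSE sequence indexed by iterations with the same sequence re-indexed by the cumulative number of per-node communications; the 1/t decay model and the communication schedule are assumptions.

```python
# Illustrative sketch (assumed decay model, not from the cited paper):
# the same MSE sequence viewed iteration-wise and communication-wise.
import numpy as np

c = 1.0
t = np.arange(1, 101)      # iteration index
mse = c / t                # assumed iteration-wise decay: MSE(t) ~ c/t

m = 3                      # assumed per-node communications per iteration
comms = m * t              # cumulative per-node communications after t iterations

# Iteration-wise rate: MSE(t) ~ c/t.  Communication rate: MSE ~ (c*m)/comms,
# i.e. the same 1/x decay, slower by the factor m when indexed by communications.
print(mse[-1], (c * m) / comms[-1])   # both equal c/100
```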