2020
DOI: 10.1109/access.2020.3003984

The Sequential Fusion Estimation Algorithms Based on Gauss-Newton Method Over Multi-Agent Networked Systems

Abstract: In multi-agent networked systems, parameter estimation problems arising in many practical applications often require solving Non-Linear Least Squares (NLLS) problems with the usual objective function (i.e., the sum of squared residuals). The aim is to estimate a global parameter of interest across the network such that the discrepancy between the estimation model and the real output of the system is minimized. There are challenges to face when applying the conventional Gauss-Newton method, such as non-coope…
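As a rough illustration of the setting the abstract describes, the sketch below applies a plain (centralized) Gauss-Newton iteration to a generic NLLS objective, i.e., the sum of squared residuals. The toy exponential model, data, and parameter values are hypothetical and are not taken from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=50, tol=1e-8):
    """Minimize f(x) = 0.5 * ||r(x)||^2 with the Gauss-Newton iteration.

    residual(x) -> r in R^m, jacobian(x) -> J in R^{m x n}.
    Each step solves the normal equations (J^T J) dx = -J^T r.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy example (hypothetical): fit y = exp(a*t) + b to noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 30)
y = np.exp(0.7 * t) + 0.3 + 0.01 * rng.standard_normal(t.size)

residual = lambda x: np.exp(x[0] * t) + x[1] - y
jacobian = lambda x: np.column_stack((t * np.exp(x[0] * t), np.ones_like(t)))

print(gauss_newton(residual, jacobian, x0=[0.0, 0.0]))   # approx. [0.7, 0.3]
```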

Cited by 7 publications (6 citation statements). References 40 publications (63 reference statements).

Citation statements (ordered by relevance):
“…where α represents the learning rate and is set to 0.35, as in [39][40][41]. Owing to the learning effect, lim_{t→∞} p_{i,j,t} = 0 and p_{i,j,t} < p*_{i,j,t} ≤ 1 for all t* < t < ∞ and 0 < p_{i,j,t}, in accordance with Equation (6). Note that 0 ≤ p_{i,j,t} = p*_{i,j,t} ≤ 1 for all t* and t if there is no learning effect.…”
Section: Learning Effect (mentioning, confidence: 99%)
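The quoted snippet states the limiting behaviour of p_{i,j,t} but not the learning-curve formula itself; the sketch below assumes a simple power-law decay p_t = p* · t^(−α) with the quoted α = 0.35, purely to reproduce the stated properties (p_t ≤ p* and p_t → 0 as t → ∞).

```python
# Illustration only: the exact learning-curve formula is not shown in the
# snippet, so a power-law decay p_t = p_star * t**(-alpha) is assumed here
# just to reproduce the stated properties (p_t <= p_star and p_t -> 0).
alpha = 0.35          # learning rate quoted in the citing paper
p_star = 0.8          # hypothetical baseline probability p*_{i,j,t}

for t in (1, 2, 5, 10, 100, 1000):
    p_t = p_star * t ** (-alpha)
    print(f"t={t:>5}: p={p_t:.4f}")   # decays monotonically toward 0
```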
“…The foregoing observation is useful, and, accordingly, we must focus on the nodes without the same neighbors. For example, nodes 1, 3, and 4 all have the same neighbors, that is, 6, 7}, and we can search for the state vectors and calculate the probability for node 1 because nodes 3 and 4 are identical.…”
Section: Experimental Analysis (mentioning, confidence: 99%)
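A minimal way to exploit the observation in this snippet, processing only one representative among nodes with identical neighbor sets, is to group nodes by their (frozen) neighbor set; the small adjacency list below is hypothetical and not the network of the citing paper.

```python
from collections import defaultdict

# Hypothetical adjacency list (undirected neighbor sets).
neighbors = {
    1: {6, 7, 8},
    3: {6, 7, 8},
    4: {6, 7, 8},
    2: {1, 3},
    5: {2, 4},
}

groups = defaultdict(list)
for node, nbrs in neighbors.items():
    groups[frozenset(nbrs)].append(node)

# Nodes in the same group are interchangeable for the probability computation,
# so only one representative per group needs to be processed.
for nbrs, nodes in groups.items():
    if len(nodes) > 1:
        print(f"nodes {sorted(nodes)} share neighbors {sorted(nbrs)}")
```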
“…For example, in computer vision based on a distributed camera sensor network [27], many problems such as image alignment, face alignment, or camera calibration can be posed as solving such a nonlinear optimization problem, where sensor nodes cooperate to minimize the overall residual between the source and target intensity images. In some decentralized network optimization tasks, for example power system state estimation in smart grids [28], [29] and fusion estimation in multi-agent systems [30], the problems are typically formulated as an NLLS problem across the network.…”
Section: B. Extension to Distributed Optimization Problems (mentioning, confidence: 99%)
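One common way to fuse such a network-wide NLLS problem is to accumulate each node's J_i^T J_i and J_i^T r_i contributions before taking a Gauss-Newton step; the sketch below is a generic illustration of that idea, not the paper's specific sequential fusion algorithm, and the two-node demo data are hypothetical.

```python
import numpy as np

def fused_gauss_newton_step(x, local_residuals, local_jacobians):
    """One network-wide Gauss-Newton step built by fusing per-node terms.

    Each node i contributes J_i(x)^T J_i(x) and J_i(x)^T r_i(x); summing them
    (e.g., sequentially across the network) is equivalent to one centralized
    step on the stacked residual vector.
    """
    n = x.size
    H = np.zeros((n, n))      # accumulated J^T J
    g = np.zeros(n)           # accumulated J^T r
    for r_fn, J_fn in zip(local_residuals, local_jacobians):
        r, J = r_fn(x), J_fn(x)
        H += J.T @ J
        g += J.T @ r
    return x - np.linalg.solve(H, g)

# Tiny demo (hypothetical data): two nodes each observe y = a*t + b.
t1, y1 = np.array([0.0, 1.0]), np.array([1.0, 3.0])
t2, y2 = np.array([2.0, 3.0]), np.array([5.0, 7.0])
res = [lambda x, t=t1, y=y1: x[0] * t + x[1] - y,
       lambda x, t=t2, y=y2: x[0] * t + x[1] - y]
jac = [lambda x, t=t1: np.column_stack((t, np.ones_like(t))),
       lambda x, t=t2: np.column_stack((t, np.ones_like(t)))]
print(fused_gauss_newton_step(np.zeros(2), res, jac))   # approx. [2.0, 1.0]
```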
“…, i_r, and the probability of state transition in step k is [21]. (4) The state transition matrix is built.…”
Section: Markov Prediction Model (mentioning, confidence: 99%)
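The snippet's step (4), building the state transition matrix, can be illustrated by the standard count-and-normalize construction below; the observed state sequence is hypothetical, and the exact probability definition of [21] is not shown in the snippet.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Estimate a Markov state-transition matrix from an observed state sequence.

    Counts transitions s_k -> s_{k+1} and row-normalizes the counts into
    probabilities; rows with no observed transitions are left as zeros.
    """
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Hypothetical observed sequence over states {0, 1, 2}.
print(transition_matrix([0, 1, 1, 2, 0, 1, 2, 2, 0], n_states=3))
```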