Stochastic Incremental Gradient Descent for Estimation in Sensor Networks

2007
DOI: 10.1109/acssc.2007.4487280
Cited by 46 publications (42 citation statements)
References 10 publications
“…Diffusion Kalman filtering and smoothing algorithms were also proposed [10], [11], [32]. Distributed estimation algorithms based on incremental [3], [5], [12] and consensus strategies [13]-[15] have also been proposed. The work [14] proposes a distributed LMS algorithm based on consensus techniques that relies on node hierarchy to reduce communications.…”
Section: Introduction
confidence: 98%
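The consensus strategies mentioned above have a simple core: each node repeatedly replaces its local value with a weighted average of its own value and its neighbours' values, so all nodes converge to the network-wide average. A minimal sketch, assuming a 4-node ring with hand-picked doubly stochastic weights (the topology, values, and weights are illustrative, not from the cited works):

```python
# Consensus averaging on a ring of 4 nodes: each node keeps 1/2 of its
# own value and takes 1/4 from each ring neighbour (doubly stochastic
# weights), so all values converge to the global mean.
n = 4
x = [1.0, 3.0, 5.0, 7.0]           # local measurements; global mean is 4.0
for _ in range(200):                # consensus iterations
    x = [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]
         for i in range(n)]
print(x)                            # every entry close to 4.0
```

Because the weight matrix is doubly stochastic, each iteration preserves the sum of the values while shrinking their spread, which is why the iterates settle on the average rather than some other combination.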
“…Minimizing a sum of convex functions, where each component is known only to a particular node, has attracted much attention recently, due to its simple formulation and wide applications [22], [20], [21], [26], [27], [23], [31], [30], [28], [29], [32], [25], [24]. The key idea is that properly designed distributed control protocols or computation algorithms can lead to a collective optimization, based on simple exchanged information and individual optimum observation.…”
Section: Introduction
confidence: 99%
“…The key idea is that properly designed distributed control protocols or computation algorithms can lead to a collective optimization, based on simple exchanged information and individual optimum observation. Subgradient-based incremental methods were established via deterministic or randomized iteration, where each node is assumed to be able to compute a local subgradient value of its objective function [20], [21], [26], [22], [25], [24]. Non-subgradient-based methods also showed up in the literature.…”
Section: Introduction
confidence: 99%
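The incremental (sub)gradient idea described in this excerpt can be sketched in a few lines: a single iterate is passed from node to node, and each node applies one gradient step using only its own cost. A minimal deterministic-cycle sketch for min_x Σ_i f_i(x) with quadratic local costs f_i(x) = (x − a_i)² (the costs, data, and step-size schedule are illustrative assumptions, not the method of any particular cited paper):

```python
# Cyclic incremental gradient method: the estimate visits each node in
# turn, and node i applies a step with the gradient of its private cost
# f_i(x) = (x - a_i)^2 before handing the estimate onward.
a = [1.0, 2.0, 6.0, 7.0]            # local data; the minimizer is the mean, 4.0
x = 0.0                             # estimate circulating through the network
step = 0.1
for k in range(1, 301):             # 300 full passes over the nodes
    mu = step / k                   # diminishing step size for convergence
    for ai in a:                    # one pass: every node updates once
        x -= mu * 2.0 * (x - ai)    # local gradient of (x - a_i)^2
print(x)                            # close to 4.0
```

With a fixed cyclic order the iterate oscillates slightly within each pass, which is why a diminishing step size is needed to drive it all the way to the minimizer; the randomized variants mentioned in the excerpt instead pick the next node at random.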
“…For example, the ring-network structure has been considered [86][87][88] for in-network information processing, where the sensors are organized in a cycle and process the information by passing the estimates along the cycle. Consensus-based approaches for distributed estimation have been discussed [14,[89][90][91][92].…”
Section: (H) Distributed Parameter Estimation
confidence: 99%
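The ring-network processing described here, with estimates passed sensor to sensor around a cycle, can be illustrated with a stochastic estimation sketch in the spirit of the paper's title: each sensor refines the circulating estimate using its own noisy measurement of a common parameter. The ring size, noise model, and running-average step size below are illustrative assumptions, not the paper's exact algorithm:

```python
import random

random.seed(0)                      # reproducible noise for this sketch

theta = 5.0                         # unknown parameter to be estimated
n = 20                              # sensors arranged in a ring
x = 0.0                            # estimate passed along the cycle
t = 0                               # total number of updates so far
for _ in range(50):                 # 50 laps around the ring
    for i in range(n):
        y = theta + random.gauss(0.0, 1.0)  # sensor i's noisy measurement
        t += 1
        x += (y - x) / t            # stochastic update with a 1/t step,
                                    # then pass the estimate onward
print(x)                            # close to 5.0
```

With the 1/t step size the circulating estimate is exactly the running average of all measurements seen so far, so after many laps it concentrates around the true parameter even though no sensor ever shares its raw data with more than its ring successor.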