2013
DOI: 10.1186/1687-6180-2013-135

A variable step-size strategy for distributed estimation over adaptive networks

Abstract: A lot of work has been done recently to develop algorithms that utilize the distributed structure of an ad hoc wireless sensor network to estimate a certain parameter of interest. One such algorithm is called diffusion least-mean squares (DLMS). This algorithm estimates the parameter of interest using the cooperation between neighboring sensors within the network. The present work proposes an improvement on the DLMS algorithm by using a variable step-size LMS (VSSLMS) algorithm. In this work, first, the well-k…
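The adapt-then-combine idea behind VSS diffusion LMS can be sketched as follows. This is a minimal illustration assuming a generic step-size rule driven by the squared error, not the paper's exact update; the function name, constants, and combination matrix are illustrative assumptions:

```python
import numpy as np

def vss_dlms(U, d, A, mu0=0.1, alpha=0.995, gamma=0.01, iters=500):
    """Adapt-then-combine diffusion LMS with a per-node variable step size.

    U[k] : (iters, M) regressor rows for node k
    d[k] : (iters,) desired samples for node k
    A    : (N, N) combination matrix with columns summing to 1;
           A[l, k] weights node l's estimate in node k's combine step.
    """
    N = A.shape[0]
    M = U[0].shape[1]
    w = np.zeros((N, M))      # local estimates
    mu = np.full(N, mu0)      # per-node step sizes
    for i in range(iters):
        psi = np.zeros_like(w)
        for k in range(N):
            u = U[k][i]
            e = d[k][i] - u @ w[k]
            # VSS rule (sketch): the step size gets a boost while the
            # error is large and decays geometrically as it dies out
            mu[k] = alpha * mu[k] + gamma * e * e
            psi[k] = w[k] + mu[k] * e * u       # adapt step
        for k in range(N):
            # combine step: average the neighbors' intermediate estimates
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w
```

With white Gaussian regressors and a small noise floor, every node's estimate converges to the common unknown vector, and the decaying step size trades the fast initial convergence for low steady-state misadjustment, which is the behavior the abstract describes.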

Cited by 45 publications (50 citation statements)
References 23 publications
“…Again, the received block data matrix can be written as (5). Taking the autocorrelation of D k,N and assuming the input data regressors to be white Gaussian with variance σ 2 s, k , we get…”
Section: Cholesky Factorization-based Blind Algorithm
confidence: 99%
“…The work in [1] introduces a distributed estimation approach using the recursive least squares algorithm. Other algorithms involving the least-mean-square (LMS) approach have also been suggested [2][3][4][5].…”
Section: Introduction
confidence: 99%
“…Motivated by the VSS mechanism [13,14] for the diffusion LMS algorithm, the low-complexity VFF mechanisms are designed such that smaller forgetting factors are employed when the estimation errors are large in order to obtain a faster convergence speed, whereas the forgetting factor increases when the estimation errors become small so as to yield better steady-state performance. Based on the above idea, an effective rule to adapt the forgetting factor can be formulated as…”
Section: LTVFF Mechanism
confidence: 99%
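The variable forgetting factor (VFF) logic quoted above can be sketched as a one-line update. The specific form, name, and constants below are assumptions for illustration, not the cited paper's exact rule; what it shares with the quote is the qualitative behavior, with large errors pushing the forgetting factor down for fast tracking and small errors letting it recover for low steady-state misadjustment:

```python
def update_forgetting_factor(lam, e, lam_min=0.90, lam_max=0.999,
                             alpha=0.95, gamma=0.5):
    """Low-complexity VFF-style rule (sketch).

    lam : current forgetting factor
    e   : instantaneous estimation error
    """
    # Large |e| drags the target below lam_max (faster tracking);
    # e -> 0 lets lam relax back toward lam_max (better steady state).
    lam = alpha * lam + (1.0 - alpha) * (lam_max - gamma * min(e * e, 1.0))
    return min(max(lam, lam_min), lam_max)
```

Clipping to [lam_min, lam_max] keeps the underlying RLS recursion well behaved regardless of error spikes.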
“…Note that we consider the diffusion cooperation strategy in this paper since the incremental strategy requires the definition of a path through the network and may not be suitable for large networks or dynamic configurations [6,7]. Many distributed estimation algorithms with the diffusion strategy have been put forward recently, such as diffusion least-mean squares (LMS) [8,9], diffusion sparse LMS [10][11][12], variable step size diffusion LMS (VSS-DLMS) [13,14], diffusion recursive least squares (RLS) [6,7], distributed sparse RLS [15], distributed sparse total least squares (TLS) [16], diffusion information theoretic learning (ITL) [17], and the diffusion-based algorithm for distributed censored regression [18]. Among assorted distributed estimation algorithms, the RLS-based algorithms achieve superior performance to the LMS-based ones by inheriting the advantages of fast convergence and low steady-state misadjustment from the RLS technique.…”
Section: Introduction
confidence: 99%
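For the diffusion strategy discussed in the quote above, one standard way to build the combination matrix over an arbitrary network topology is the Metropolis rule. This is a common textbook choice, not something specific to the cited papers:

```python
import numpy as np

def metropolis_weights(adjacency):
    """Metropolis combination weights for a diffusion network.

    adjacency : (N, N) symmetric 0/1 matrix with zero diagonal.
    Returns a doubly stochastic matrix A where A[l, k] is the weight
    node k assigns to node l's intermediate estimate.
    """
    N = adjacency.shape[0]
    deg = adjacency.sum(axis=1)       # node degrees
    A = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if adjacency[k, l]:
                # weight each link by the larger of the two degrees
                A[l, k] = 1.0 / (1 + max(deg[k], deg[l]))
        # self-weight absorbs whatever mass is left in the column
        A[k, k] = 1.0 - A[:, k].sum()
    return A
```

Because the resulting matrix is doubly stochastic, the combine step preserves the network-wide average of the estimates, and no predefined path through the network is needed, which is exactly the advantage over the incremental strategy noted in the quote.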