1998
DOI: 10.1002/(sici)1099-1115(199811)12:7<579::aid-acs519>3.0.co;2-h
Exponential convergence of products of random matrices: application to adaptive algorithms

Abstract: We introduce a novel methodology for analysing well-known classes of adaptive algorithms. Combining recent developments concerning geometric ergodicity of stationary Markov processes with long-standing results from the theory of Perturbations of Linear Operators, we first study the behaviour and convergence properties of a class of products of random matrices; this in turn allows for the analysis of the first- and second-order statistics of adaptive algorithms without the need for any restrictive conditions impose…

Cited by 14 publications (41 citation statements)
References 14 publications
“…The mean square stability for MJLS with a countably infinite Markov chain was studied in [12], using an operator theoretic point of view. Some problems related to the mean square stability of MJLS with general state space were studied in [13], [14] under some ergodicity assumptions. The relation among several notions of stochastic stability for MJLS was studied in [11].…”
Section: Introduction (mentioning)
confidence: 99%
“…Most of the adaptive filters in use today can be described by the recursion [28,29] W(n+1) = W(n) + µ f(e(n)) Z(n), where µ is a constant known as the step size, f(·) is a known function, and Z(n) is a vector that depends on the sequences d(k)… A widely used variant is the normalized LMS (NLMS), obtained with the same f(a) = a and with Z(n) = X(n)/(ε + ‖X(n)‖²), where ‖a‖ is the Euclidean norm of the vector a and ε is a non-negative constant.…”
Section: Adaptive Filters (unclassified)
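The LMS/NLMS recursion quoted in this citation statement can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the cited paper's implementation: the function name `nlms`, the step size `mu`, and the regularisation constant `eps` are hypothetical choices, and the regressor X(n) is taken to be the most recent `num_taps` input samples.

```python
import numpy as np

def nlms(x, d, num_taps, mu=0.5, eps=1e-6):
    """Normalized LMS: W(n+1) = W(n) + mu * f(e(n)) * Z(n) with f(a) = a
    and Z(n) = X(n) / (eps + ||X(n)||^2), as in the quoted recursion."""
    W = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        X = x[n - num_taps + 1:n + 1][::-1]    # regressor, most recent sample first
        e[n] = d[n] - W @ X                    # a priori error e(n) = d(n) - W(n)^T X(n)
        W = W + mu * e[n] * X / (eps + X @ X)  # normalized update step
    return W, e
```

Identifying a known FIR filter from its noise-free input/output is a simple way to exercise the recursion: the weight vector converges to the true taps.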
“…Another algorithm of great interest, the recursive least squares (RLS), can also be obtained from the expression above with suitable choices of f(·) and Z(n) [29]:
d̂(n) = W(n)ᵀ X(n),
e(n) = d(n) − d̂(n),
W(n+1) = W(n) + λ⁻¹ P(n) X(n) e(n) / (1 + λ⁻¹ X(n)ᵀ P(n) X(n)),
P(n+1) = λ⁻¹ P(n) − λ⁻² P(n) X(n) X(n)ᵀ P(n) / (1 + λ⁻¹ X(n)ᵀ P(n) X(n)).
In the general case, the step size µ is a design parameter of the filter.…”
Section: (12) (unclassified)
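The RLS recursion in the passage above can likewise be sketched directly. Again a hedged illustration, not the source's code: the function name `rls` is hypothetical, and the initialisation P(0) = δ·I with `delta` is a common convention assumed here rather than taken from the quoted text.

```python
import numpy as np

def rls(x, d, num_taps, lam=0.99, delta=100.0):
    """RLS recursion as quoted: a priori error e(n) = d(n) - W(n)^T X(n),
    gain lam^-1 P X / (1 + lam^-1 X^T P X), and the matching P update."""
    W = np.zeros(num_taps)
    P = delta * np.eye(num_taps)   # assumed initialisation P(0) = delta * I
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        X = x[n - num_taps + 1:n + 1][::-1]  # regressor, most recent sample first
        e[n] = d[n] - W @ X                  # a priori error
        PX = P @ X
        denom = 1.0 + (X @ PX) / lam         # 1 + lam^-1 X^T P(n) X
        W = W + (PX / lam) * (e[n] / denom)  # weight update
        P = (P - np.outer(PX, PX) / (lam * denom)) / lam  # P(n+1)
    return W, e
```

On the same noise-free system-identification task as above, RLS converges to the true taps in far fewer samples than (N)LMS, at the cost of the O(num_taps²) update of P(n).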