1998
DOI: 10.1109/78.726817

Performance analysis of an adaptive algorithm for tracking dominant subspaces

Abstract: This paper provides a performance analysis of a least mean square (LMS) dominant invariant subspace algorithm. Based on an unconstrained minimization problem, this algorithm is a stochastic gradient algorithm driving the columns of a matrix W to an orthonormal basis of a dominant invariant subspace of a correlation matrix. We consider the stochastic algorithm governing the evolution of WW^H to the projection matrix onto this dominant invariant subspace and study its asymptotic distribution. A close…
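The abstract describes an LMS-type stochastic gradient update that drives a matrix W toward an orthonormal basis of the dominant invariant subspace of a correlation matrix. A minimal sketch of this class of algorithm is an Oja-style subspace-tracking iteration; the function name, step size, and data model below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lms_subspace_track(x_stream, r, mu=0.005, seed=0):
    """Oja-style LMS subspace tracking (a sketch of the algorithm
    class analyzed in the paper; parameters are illustrative).
    Drives W toward an orthonormal basis of the r-dimensional
    dominant invariant subspace of E[x x^H]."""
    rng = np.random.default_rng(seed)
    n = len(x_stream[0])
    # random orthonormal starting point
    W = np.linalg.qr(rng.standard_normal((n, r)))[0]
    for x in x_stream:
        x = x.reshape(-1, 1)
        y = W.conj().T @ x                   # project sample onto current basis
        # stochastic gradient step: reinforce along x, renormalize via W y y^H
        W += mu * (x @ y.conj().T - W @ (y @ y.conj().T))
    return W
```

With a small step size mu, WW^H approaches the orthogonal projector onto the dominant subspace, which is exactly the quantity whose asymptotic distribution the paper studies.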


Cited by 16 publications (20 citation statements)
References 27 publications
“…‖W − W*‖_Fro = O(e^{−(λ_r − λ_{r+1})t}). A performance analysis has been given in [24], [25]. This issue will be used as an example analysis of convergence and performance in Subsection VII-C2.…”
Section: A. Subspace Power-based Methods
confidence: 99%
“…Using (68) with (16) and (23), h_k h_l Re[a^H(θ_l) U a(θ_k) (a′^H(θ_k) Π_x a′(θ_l))] and compactly by…”
Section: Subspace-based Algorithms
confidence: 99%
“…Note that these expressions have been derived in [80] by much more involved derivations based on the asymptotic distribution of the eigenvectors of the sample covariance matrix R_{x,N}. Finally, note that if the sample orthogonal noise projector Π_{x,N} is replaced by an adaptive estimator Π_{x,γ} of Π_x, where γ is the step-size of an arbitrary constant step-size recursive stochastic algorithm (see, e.g., [16] and [17]), it has been proved in [16] that √γ(θ̂_γ − θ) converges in distribution to the zero-mean Gaussian distribution with covariance matrix also given by R_MUSIC^θ, where θ̂_γ is an adaptive estimate of θ given by the MUSIC algorithm based on the specific adaptive estimate Π_{x,γ} of Π_x studied in [16].…”
Section: Subspace-based Algorithms
confidence: 99%
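The statement above concerns MUSIC direction-of-arrival estimation built on a noise-subspace projector Π_x (whether computed from a sample covariance or tracked adaptively). A minimal sketch of the MUSIC pseudo-spectrum, assuming a uniform linear array with half-wavelength spacing (the array model and grid are illustrative assumptions):

```python
import numpy as np

def music_spectrum(Pi_noise, grid, n_sensors):
    """MUSIC pseudo-spectrum 1 / (a^H(θ) Π a(θ)) from a noise-subspace
    projector Π, for a uniform linear array with half-wavelength
    spacing (illustrative model, not the specific setup of [16])."""
    spec = []
    for theta in grid:
        # ULA steering vector at angle theta (radians)
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
        # peaks occur where a(θ) is (nearly) orthogonal to the noise subspace
        spec.append(1.0 / np.real(a.conj() @ Pi_noise @ a))
    return np.array(spec)
```

Replacing the sample projector Π_{x,N} here by a constant step-size adaptive estimate Π_{x,γ} gives the adaptive MUSIC variant whose asymptotic normality is discussed in the quoted passage.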
“…Diffusions have frequently been used to seek global optimizers [2], [20] or to sample from given densities [19], [21], [27] in several applications. With the advent of high-speed desktop computing, such stochastic searches on high-dimensional spaces have become prominent.…”
Section: B. Diffusion Process On
confidence: 99%
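The passage above refers to diffusions used for global optimization and for sampling from given densities. A standard concrete instance is the unadjusted Langevin algorithm, a discretized diffusion whose stationary density approximates a target p; the target, step size, and function names below are illustrative assumptions:

```python
import numpy as np

def langevin_sample(grad_log_p, x0, step=0.05, n_steps=20000, seed=0):
    """Unadjusted Langevin dynamics: x <- x + step * grad log p(x)
    + sqrt(2 * step) * noise. A sketch of the diffusion-based
    stochastic search mentioned above (parameters are illustrative)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * noise
        samples.append(x.copy())
    return np.array(samples)
```

Run long enough with a small step, the chain's empirical distribution approximates p, which is how such diffusions sample from given densities or, with an annealed step size, seek global optimizers.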