2001 IEEE Workshop on Signal Processing Systems. SiPS 2001. Design and Implementation (Cat. No.01TH8578)
DOI: 10.1109/sips.2001.957335

Tracking performance of leakage LMS for chirped signals

Cited by 4 publications (3 citation statements) | References 6 publications
“…This inequality confirms that the momentum algorithm can achieve a faster rate in deterministic optimization and, moreover, this faster rate cannot be attained by standard gradient descent. Motivated by these useful acceleration properties in the deterministic context, momentum terms have been subsequently introduced into stochastic optimization algorithms as well (Polyak, 1987;Proakis, 1974;Sharma et al, 1998;Shynk and Roy, June 1988;Roy and Shynk, 1990;Tugay and Tanik, 1989;Bellanger, 2001;Wiegerinck et al, 1994;Hu et al, 2009;Xiao, 2010;Lan, 2012;Ghadimi and Lan, 2012;Zhong and Kwok, 2014) and applied, for example, to problems involving the tracking of chirped sinusoidal signals (Ting et al, 2000) or deep learning (Sutskever et al, 2013;Kahou et al, 2013;Szegedy et al, 2015;Zareba et al, 2015). However, the analysis in this paper will show that the advantages of the momentum technique for deterministic optimization do not necessarily carry over to the adaptive online setting due to the presence of stochastic gradient noise (which is the difference between the actual gradient vector and its approximation).…”
Section: Acceleration Methods (mentioning)
confidence: 99%
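The acceleration claim in this excerpt is easy to reproduce numerically. Below is a minimal sketch, not taken from any of the cited papers, comparing the heavy-ball momentum recursion w(k+1) = w(k) - μ∇f(w(k)) + β[w(k) - w(k-1)] against plain gradient descent on an assumed ill-conditioned quadratic; the matrix, step size, and momentum weight are all illustrative choices.

```python
# Minimal sketch: heavy-ball momentum vs. plain gradient descent on an
# assumed quadratic f(w) = 0.5 * w^T A w - b^T w (illustrative only).
import numpy as np

A = np.diag([1.0, 100.0])        # ill-conditioned Hessian (condition number 100)
b = np.array([1.0, 1.0])
w_star = np.linalg.solve(A, b)   # exact minimizer, for measuring the error

def run(beta, mu=0.018, iters=200):
    w = np.zeros(2)
    w_prev = np.zeros(2)
    for _ in range(iters):
        grad = A @ w - b
        # Heavy-ball update; beta = 0 recovers standard gradient descent.
        w, w_prev = w - mu * grad + beta * (w - w_prev), w
    return np.linalg.norm(w - w_star)

print("gradient descent:", run(beta=0.0))   # limited by the slow eigendirection
print("heavy-ball      :", run(beta=0.9))   # contracts much faster per step
```

With β = 0 the recursion is dominated by the slowest eigendirection of A, while the momentum run contracts every mode at roughly sqrt(β) per step, consistent with the deterministic rate advantage described in the excerpt. As the excerpt also notes, this gap need not survive once stochastic gradient noise enters the update.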
“…This is why the percentage improvement in MSE is more significant for smaller step sizes. Figure 6 shows the MSE comparison between the proposed Markov-chain-based booster and momentum LMS [25]. The proposed booster uses the Markov chain con- …”
Section: Results (mentioning)
confidence: 99%
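For reference, the momentum LMS algorithm cited as [25] above augments the plain LMS step with a weighted copy of the previous weight increment, w(n+1) = w(n) + μ e(n) u(n) + β[w(n) - w(n-1)]. The system-identification setup below is a hypothetical sketch of that recursion, not the experiment behind Figure 6; the channel, signal lengths, and parameters are assumptions.

```python
# Minimal sketch of a momentum LMS weight update; the noise-free
# system-identification scenario is illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 5000
h = rng.standard_normal(M)            # unknown FIR system (assumed)
x = rng.standard_normal(N + M)        # white input
d = np.convolve(x, h, mode="full")    # desired signal d[n] = sum_k h[k] x[n-k]

mu, beta = 0.01, 0.5                  # step size and momentum weight (assumed)
w = np.zeros(M)
w_prev = np.zeros(M)
for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]      # regressor [x[n], ..., x[n-M+1]]
    e = d[n] - w @ u                  # a-priori error
    # Momentum LMS: plain LMS step plus beta times the previous increment.
    w, w_prev = w + mu * e * u + beta * (w - w_prev), w

print("weight error:", np.linalg.norm(w - h))
```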
“…In practical applications, to ensure the stability of the filter, the leakage factor χ is generally chosen with 0.95 < χ < 1 [37]. In this section it is set to χ = 0.97, and the corresponding range of the adaptive step size, obtained from (13), is 0 < μ < 1.97.…”
Section: B. Parameters Selection (mentioning)
confidence: 99%
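The following is a minimal sketch of a leaky adaptive update using the parameter choices quoted above: leakage χ = 0.97 and a step size inside 0 < μ < 1.97. The normalized-LMS form of the update and the noisy-sinusoid prediction scenario are assumptions for illustration; equation (13) of the citing paper is not reproduced here.

```python
# Minimal sketch of a leaky (normalized) LMS update with chi = 0.97.
# The one-step sinusoid prediction setup is illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 2000
chi, mu, eps = 0.97, 0.5, 1e-8        # leakage, step size (inside 0 < mu < 1.97), regularizer

t = np.arange(N)
x = np.cos(0.2 * np.pi * t) + 0.01 * rng.standard_normal(N)   # noisy tone
d = np.cos(0.2 * np.pi * (t - 1))                             # target: delayed clean tone

w = np.zeros(M)
e = 0.0
for k in range(M, N):
    u = x[k - M + 1:k + 1][::-1]      # current regressor
    e = d[k] - w @ u                  # a-priori error
    # Leaky update: chi < 1 shrinks the weights every step, which bounds
    # weight drift for nonstationary (e.g. chirped) inputs at the cost
    # of a small steady-state bias.
    w = chi * w + mu * e * u / (eps + u @ u)

print("final |e|:", abs(e))
```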