2022
DOI: 10.1016/j.isatra.2022.02.011
Two-stage extended recursive gradient algorithm for locally linear RBF-based autoregressive models with colored noises

Cited by 5 publications (3 citation statements) | References 79 publications
“…Referring to the method in References 46,68‐70, Equation (13) can be viewed as a discrete-time system whose state is $\hat{\boldsymbol{\vartheta}}(t)$ and whose input is $\mu(t)\boldsymbol{\xi}(t)$. To ensure the convergence of $\hat{\boldsymbol{\vartheta}}(t)$, the eigenvalues of the matrix $[\boldsymbol{I}_n-\mu(t)\boldsymbol{R}(t)]$ must lie inside the unit circle; that is, $\mu(t)$ should satisfy $-\boldsymbol{I}_n \leqslant \boldsymbol{I}_n-\mu(t)\boldsymbol{R}(t) \leqslant \boldsymbol{I}_n$, so a reasonable choice of $\mu(t)$ is
$$ \mu(t) \leqslant \frac{2}{\lambda_{\max}\left[\boldsymbol{R}(t)\right]} = 2\lambda_{\max}^{-1}\left[\boldsymbol{R}(t)\right]. $$ …”
Section: The Key Term Separation Auxiliary Model Recursive Gradient A...
confidence: 99%
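The step-size bound in this excerpt is easy to check numerically. Below is a minimal Python sketch, assuming $\boldsymbol{R}(t)$ is a symmetric positive-definite information matrix; the helper name `safe_step_size` and the `safety` back-off factor are illustrative assumptions, not part of the cited work.

```python
import numpy as np

def safe_step_size(R, safety=0.99):
    """Pick mu(t) <= 2 / lambda_max[R(t)] so that the eigenvalues of
    I_n - mu(t) R(t) stay inside the unit circle.
    `safety` < 1 backs off slightly from the boundary (an assumption,
    not part of the cited derivation)."""
    lam_max = np.linalg.eigvalsh(R).max()  # R is assumed symmetric here
    return safety * 2.0 / lam_max

# Toy 3x3 symmetric positive-definite matrix standing in for R(t)
R = np.array([[2.0, 0.3, 0.1],
              [0.3, 1.5, 0.2],
              [0.1, 0.2, 1.0]])
mu = safe_step_size(R)

# Check: spectral radius of I_n - mu * R is below 1, as the bound requires
rho = np.abs(np.linalg.eigvals(np.eye(3) - mu * R)).max()
assert rho < 1.0
```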
See 1 more Smart Citation
“…Referring to the method in References 46,68‐70, Equation (13) can be viewed as a discrete‐time system where the state is truebold-italicϑ^false(tfalse)$$ \hat{\boldsymbol{\vartheta}}(t) $$ and the input is μfalse(tfalse)bold-italicξfalse(tfalse)$$ \mu (t)\boldsymbol{\xi} (t) $$. To ensure the convergence of truebold-italicϑ^false(tfalse)$$ \hat{\boldsymbol{\vartheta}}(t) $$, the eigenvalues of matrix false[bold-italicInprefix−μfalse(tfalse)bold-italicRfalse(tfalse)false]$$ \left[{\boldsymbol{I}}_n-\mu (t)\boldsymbol{R}(t)\right] $$ must be in the unit circle, this is to say μfalse(tfalse)$$ \mu (t) $$ should satisfy prefix−bold-italicInbold-italicInprefix−μfalse(tfalse)bold-italicRfalse(tfalse)bold-italicIn$$ -{\boldsymbol{I}}_n\leqslant {\boldsymbol{I}}_n-\mu (t)\boldsymbol{R}(t)\leqslant {\boldsymbol{I}}_n $$, so a reasonable choice of μfalse(tfalse)$$ \mu (t) $$ is to satisfy μfalse(tfalse)2λmaxfalse[bold-italicRfalse(tfalse)false]=2λmaxprefix−1false[bold-italicRfalse(tfalse)false].$$ \mu (t)\leqslant \frac{2}{\lambda_{\mathrm{max}}\left[\boldsymbol{R}(t)\right]}=2{\lambda}_{\mathrm{max}}^{-1}\left[\boldsymbol{R}(t)\right].…”
Section: The Key Term Separation Auxiliary Model Recursive Gradient A...mentioning
confidence: 99%
“…From the above derivation, we can obtain the KT‐AM‐RG algorithm 68‐70:
$$ \hat{\boldsymbol{\vartheta}}(t) = \hat{\boldsymbol{\vartheta}}(t-1) + \frac{1}{r(t)}\left[\boldsymbol{\xi}(t) - \boldsymbol{R}(t)\hat{\boldsymbol{\vartheta}}(t-1)\right], $$
$$ r(t) = r(t-1) + \left\Vert \hat{\boldsymbol{\varphi}}(t)\right\Vert^2, $$
$$ \boldsymbol{\xi}(t) = \boldsymbol{\xi}(t-1) + \hat{\boldsymbol{\varphi}}(t)\,y(t) \in \mathbb{R}^n, $$
$$ \boldsymbol{R}(t) = \boldsymbol{R}(t-1)\ldots $$ …”
Section: The Key Term Separation Auxiliary Model Recursive Gradient A...
confidence: 99%
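As a rough illustration of these recursions, here is a minimal Python sketch of one KT-AM-RG step. The $\boldsymbol{R}(t)$ update is truncated in the excerpt above; the sketch assumes the rank-one accumulation $\boldsymbol{R}(t) = \boldsymbol{R}(t-1) + \hat{\boldsymbol{\varphi}}(t)\hat{\boldsymbol{\varphi}}(t)^{\mathsf T}$, mirroring the form of the $\boldsymbol{\xi}(t)$ update; the initialization and the synthetic regressors are also illustrative assumptions.

```python
import numpy as np

def kt_am_rg_step(theta_prev, r_prev, xi_prev, R_prev, phi_hat, y):
    """One KT-AM-RG recursion following the excerpt above.
    The R(t) update is truncated in the cited text; this sketch assumes
    R(t) = R(t-1) + phi_hat phi_hat^T (an assumption)."""
    r = r_prev + phi_hat @ phi_hat            # r(t) = r(t-1) + ||phi_hat(t)||^2
    xi = xi_prev + phi_hat * y                # xi(t) = xi(t-1) + phi_hat(t) y(t)
    R = R_prev + np.outer(phi_hat, phi_hat)   # assumed R(t) update (truncated in source)
    theta = theta_prev + (xi - R @ theta_prev) / r
    return theta, r, xi, R

# Toy usage with n = 3 parameters and synthetic data
rng = np.random.default_rng(0)
n = 3
theta_true = np.array([0.5, -0.2, 0.1])
theta, r, xi, R = np.zeros(n), 1.0, np.zeros(n), np.zeros((n, n))
for t in range(200):
    phi = rng.standard_normal(n)   # stands in for the auxiliary-model regressor
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    theta, r, xi, R = kt_am_rg_step(theta, r, xi, R, phi, y)
```

Starting `r` at 1.0 avoids division by zero on the first step; the estimate `theta` drifts toward `theta_true` as the recursion accumulates information.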
“…For the ExpAR model, Xu et al. proposed a filter-based three-stage multi-innovation extended stochastic gradient algorithm [19] to estimate the parameters. To estimate the parameters of the RBF-AR model, Zhou et al. provided a two-stage extended recursive gradient algorithm [20]. Gan et al. proposed a variable projection (VP) algorithm to estimate the parameters of a variety of hybrid models, such as the ExpAR model [21], the RBF-AR model [22], and the GRBF-AR model [16].…”
Section: Introduction
confidence: 99%