1997
DOI: 10.1109/78.650249
A hybrid least squares QR-lattice algorithm using a priori errors

Cited by 20 publications (9 citation statements)
References 11 publications
“…From (38), we get (39) where are auxiliary parameters given by . Since the Givens rotation annihilates the first nonzero element of the second row, it can be shown that (40) Combining (38) and (40), we have (41) and…”
Section: Givens-based Approximate QR-LS Algorithm
confidence: 98%
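The core operation in this excerpt is a 2x2 Givens rotation chosen to annihilate the leading nonzero element of the second of two stacked rows. The following minimal NumPy sketch is purely illustrative; the variable names and the small example matrix are not from the cited paper.

    import numpy as np

    def givens(a, b):
        # Return (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T.
        r = np.hypot(a, b)
        if r == 0.0:
            return 1.0, 0.0
        return a / r, b / r

    # Two stacked rows; the rotation zeroes the first nonzero element of the second row.
    R = np.array([[3.0, 1.0, 2.0],
                  [4.0, 5.0, 6.0]])
    c, s = givens(R[0, 0], R[1, 0])
    G = np.array([[c, s], [-s, c]])
    print(G @ R)  # entry (1, 0) is now numerically zero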
“…The first class of algorithms recursively updates the inverse of the correlation matrix of the input signal using certain time- and order-recursions [1]-[5], [9]-[11], [21], [22], [39]. The second class of algorithms works directly with the data matrix using QR decomposition (QRD) [Givens rotation or Householder transformation] [6]-[8], [20]-[24], [37], [38], [40]-[42]. Fast RLS algorithms using the QRD usually exhibit better numerical properties because the condition number of the data matrix is lower than that of the input correlation matrix, which is the square of that of the data matrix.…”
Section: Introduction
confidence: 99%
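The numerical argument in this excerpt, that the condition number of the input correlation matrix is the square of that of the data matrix, can be checked directly. The short NumPy sketch below uses an arbitrary random data matrix purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))   # data matrix: 200 snapshots of an 8-tap regressor
    Rxx = X.T @ X                       # (unnormalised) input correlation matrix

    k_X = np.linalg.cond(X)
    k_R = np.linalg.cond(Rxx)
    print(k_X, k_R, k_X ** 2)           # k_R matches k_X squared, so QRD methods that work
                                        # on X face a much better-conditioned problem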
“…A fundamental result in the derivation of the fast QR-RLS has been reported (McWhirter 1983). In recent years, a number of fast QR-RLS algorithms have been introduced, using McWhirter's filter and exploiting forward and backward linear prediction to update the weights (Haykin 1991; Regalia 1993; Yang and Bohme 1992; Morris and Khemaissia 1995; Miranda and Gerken 1997). The purpose of this paper is to introduce a new kind of linearized training algorithm for feedforward neural networks.…”
Section: Introduction
confidence: 99%
“…Starting from the conventional (or O[N²]) QR decomposition method [5], a number of fast algorithms (O[N]) were derived [4], [8], [6], [9], [1]. It was shown in [1] that these algorithms can be classified in terms of the type of triangularization applied to the input data matrix (upper or lower triangular) and type of error vector (a posteriori or a priori) used in the updating process.…”
Section: Introduction
confidence: 99%
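For context, a conventional O(N²)-per-update QRD-RLS recursion rotates each new data row into an exponentially weighted triangular factor. The sketch below is a generic illustration under that assumption (forgetting factor lam, upper-triangular R initialised to a small multiple of the identity); it is not a reproduction of the fast O(N) algorithms classified in [1].

    import numpy as np

    def givens(a, b):
        # (c, s) such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T
        r = np.hypot(a, b)
        return (1.0, 0.0) if r == 0.0 else (a / r, b / r)

    def qrd_rls_update(R, p, x, d, lam=0.99):
        # One conventional QRD-RLS time update: rotate the new row [x^T, d]
        # into the weighted factors (sqrt(lam)*R, sqrt(lam)*p), column by column.
        R = np.sqrt(lam) * R.copy()
        p = np.sqrt(lam) * p.copy()
        x = np.asarray(x, dtype=float).copy()
        d = float(d)
        for i in range(len(x)):
            c, s = givens(R[i, i], x[i])
            Ri, xi = R[i, i:].copy(), x[i:].copy()
            R[i, i:], x[i:] = c * Ri + s * xi, -s * Ri + c * xi
            p[i], d = c * p[i] + s * d, -s * p[i] + c * d
        w = np.linalg.solve(R, p)   # LS weight vector by back-substitution
        return R, p, w

Starting from R = delta * I and p = 0, repeated calls track the exponentially weighted least-squares solution; each update costs O(N²), which is the baseline the fast O(N) lattice and rotation-based algorithms improve on.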
“…It was shown in [1] that these algorithms can be classified in terms of the type of triangularization applied to the input data matrix (upper or lower triangular) and type of error vector (a posteriori or a priori) used in the updating process. It can be seen from the Gram-Schmidt orthogonalization procedure that an upper triangularization (in the notation adopted in this work, as in [6], [9]) updates the forward prediction errors, whereas a lower triangularization updates the backward prediction errors. Table 1 presents the classification used in [1] and introduces how these algorithms will be designated hereafter.…”
Section: Introduction
confidence: 99%