IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)
DOI: 10.1109/ijcnn.2001.939003
A complex EKF-RTRL neural network


Cited by 13 publications (8 citation statements). References 10 publications.
“…where R and I denote the real and imaginary parts respectively, W_a is a scalar, W_b is a 1-by-13 matrix, and C is a scalar. The neural network is trained by a Kalman filter for complex-valued signals, as described by the first author in [11]. The training procedure was enhanced in this paper by the use of heuristic fine-tuning techniques for the Kalman filtering.…”
Section: A. The Equalizer Structure
confidence: 99%
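The equation this excerpt quotes did not survive extraction, so only the weight shapes are known: W_a and C are scalars and W_b is 1-by-13. Below is a minimal sketch, assuming a single complex recurrent neuron with a split real/imaginary activation; every name here (split_tanh, equalizer_output, the use of C as a bias) is an assumption for illustration, not the citing paper's actual structure.

```python
import numpy as np

rng = np.random.default_rng(0)

W_a = complex(*rng.standard_normal(2))                        # scalar recurrent weight
W_b = rng.standard_normal(13) + 1j * rng.standard_normal(13)  # 1-by-13 input weights
C = complex(*rng.standard_normal(2))                          # scalar, treated as a bias (assumption)

def split_tanh(z):
    # Split activation: tanh applied separately to the real and imaginary
    # parts, a common choice in complex-valued neural networks.
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def equalizer_output(x, y_prev):
    # One complex recurrent neuron: a 13-tap received-sample vector x plus
    # the fed-back previous output y_prev (hypothetical structure).
    return split_tanh(W_a * y_prev + W_b @ x + C)

x = rng.standard_normal(13) + 1j * rng.standard_normal(13)  # received samples
y = equalizer_output(x, 0j)
print(y)
```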
“…Details of the Kalman-trained recurrent neural network algorithm can be found in [11]. Table I summarizes the recurrent neural network Kalman training for the proposed equalizer, including the tuning procedure just described.…”
Section: A. The Equalizer Structure
confidence: 99%
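For readers unfamiliar with Kalman training, a generic single-output EKF weight update looks roughly like the sketch below. This is the textbook form for real-valued weights (the complex case replaces transposes with conjugate transposes); it is not the citing paper's Table I, and the artificial process noise q is only one plausible form of the heuristic tuning mentioned above.

```python
import numpy as np

def ekf_step(w, P, H, err, R=1e-2, q=1e-4):
    # One EKF update treating the weight vector w as the filter state.
    #   w   : (n,)   current weights
    #   P   : (n,n)  approximate weight-error covariance
    #   H   : (n,)   derivative of the network output w.r.t. w (for an
    #                RTRL-trained RNN this comes from the recursively
    #                propagated sensitivities)
    #   err : scalar output error, desired minus actual
    #   R   : measurement-noise variance, a tuning knob
    #   q   : artificial process noise, a common heuristic that keeps P
    #         from collapsing (assumption, not the paper's exact procedure)
    s = H @ P @ H + R             # innovation variance (scalar output)
    k = P @ H / s                 # Kalman gain
    w = w + k * err               # state (weight) update
    P = P - np.outer(k, H @ P) + q * np.eye(len(w))
    return w, P
```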
“…Known disadvantages of gradient-based methods are slow convergence rates and the long training symbol sequences necessary for suitable performance of neural-network-based devices. To overcome such problems, Kalman-filter-trained neural networks have been considered in the literature [2], [3], [4], [5], [6].…”
Section: Introduction
confidence: 99%
“…The essence of the recursive EKF procedure is that an approximate covariance matrix is generated which encapsulates second-order information about the training problem considered, and the elements of this matrix evolve during the training process. Since Singhal and Wu introduced the EKF training algorithm in [20] in the context of static feedforward neural networks (FNNs), the EKF has constituted the basis of computationally efficient neural-network training techniques that facilitate the application of FNNs and RNNs to diverse problems such as pattern classification [15], [16], control [21], [22], and channel equalization [23], [24], [25], [26].…”
Section: Training Algorithms for FCRNNs
confidence: 99%
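The evolving covariance matrix the excerpt describes can be made concrete with a toy loop: P starts diffuse and contracts as observations arrive, so its inverse accumulates approximately the Gauss-Newton curvature sum_t H_t H_t^T / R. The Jacobians below are random stand-ins, purely for illustration.

```python
import numpy as np

n = 5
P = 100.0 * np.eye(n)             # diffuse prior: large initial weight uncertainty
R = 1e-2                          # measurement-noise variance
rng = np.random.default_rng(1)

for t in range(200):
    H = rng.standard_normal(n)    # stand-in for the output Jacobian at step t
    s = H @ P @ H + R
    k = P @ H / s
    P = P - np.outer(k, H @ P)    # P contracts with every observation

# P is now small along well-excited directions; inv(P) has accumulated
# roughly sum_t outer(H_t, H_t) / R, i.e. Gauss-Newton-style curvature.
print(np.linalg.eigvalsh(P))
```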