1998
DOI: 10.1109/5.726790
A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification


Cited by 108 publications (58 citation statements)
References 19 publications
“…The results of our experiments show that the use of the stabilization matrix has correctly addressed this significant issue, and the concept has been used to attack the long-standing problem of efficiently training RNNs to solve the tracking problem. Our approach contrasts to conventional RNN research which tends to focus on the complementary problem known as "vanishing" gradients (Hochreiter and Schmidhuber, 1997), or superior optimizers (Feldkamp and Puskorius, 1998). We have described how the method we proposed can be used in conjunction with these more powerful optimizers.…”
Section: Discussion
confidence: 99%
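The excerpt above contrasts the stabilization-matrix idea with work on "vanishing" gradients. Purely as an illustrative aside (not drawn from either cited paper), the sketch below shows the phenomenon numerically: for a linear recurrence whose weight matrix has spectral radius below one, the backpropagated gradient norm decays geometrically with the number of unrolled steps. The dimensions, horizon, and scaling are assumptions chosen for the demonstration.

```python
# Illustrative sketch of the vanishing-gradient effect in a linear recurrence
# h_t = W h_{t-1}; not code from the cited works. Sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 16                      # hidden-state dimension (assumed)
T = 50                      # number of unrolled time steps (assumed)

# Recurrent weight matrix rescaled to spectral radius ~0.9 (a contraction).
W = rng.standard_normal((n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Backpropagation through T steps multiplies the gradient by W^T at each step.
grad = rng.standard_normal(n)           # stand-in for dL/dh_T
for t in range(T):
    grad = W.T @ grad
    if (t + 1) % 10 == 0:
        print(f"after {t+1:3d} steps: gradient norm = {np.linalg.norm(grad):.3e}")
```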
“…Of course more sophisticated learning optimizers might be able to navigate a crinkly surface better than RPROP did (such as the multi-stream extended Kalman filter algorithm (Feldkamp and Puskorius, 1998), or conjugate gradient descent), but it would be expected that the stabilization-matrix method would assist these second-order algorithms to achieve better RNNs than they otherwise would.…”
Section: Using the Stabilization Matrix
confidence: 99%
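The excerpt names RPROP alongside stronger optimizers such as the multi-stream extended Kalman filter and conjugate gradients. For orientation only, here is a minimal sketch of the sign-based RPROP- update rule on a toy quadratic objective; the hyperparameter values are conventional defaults assumed for illustration, and the helper `rprop_minus_step` is hypothetical, not code from the cited works.

```python
# Minimal RPROP- sketch on a toy objective; illustrative only.
import numpy as np

def rprop_minus_step(w, grad, prev_grad, step,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
    """One RPROP- update: adapt per-parameter step sizes from gradient signs."""
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # RPROP-: suppress flipped gradients
    w = w - np.sign(grad) * step
    return w, grad, step

# Toy usage: minimize f(w) = 0.5 * ||w - target||^2.
target = np.array([3.0, -2.0, 0.5])
w = np.zeros(3)
prev_grad = np.zeros(3)
step = np.full(3, 0.1)
for _ in range(60):
    grad = w - target                              # gradient of the toy objective
    w, prev_grad, step = rprop_minus_step(w, grad, prev_grad, step)
print("final w:", w)                               # approaches the target
```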
“…The operations described by (13) and (14) can be carried out using simple linear perceptrons, connected recurrently in a network structure shown in Fig. 1.…”
Section: Neural Network Technique for the Real-Time Dynamic Error Correction
confidence: 99%
“…Classical dynamic error correction algorithms are usually characterized by high complexity of numerical operations, in particular in the case of describing the transducer dynamics by means of higher order differential equations. ANNs as "universal approximators" [12,13,14] have been widely used for transducer static error correction [15,16,17,18], in particular for transducer and measuring instrument calibration [19,20,21]. Nevertheless, in the field of real-time dynamic error correction, solutions using DSP [22,23,24], FPGA techniques [25] and analog circuits [26,27] are dominant.…”
Section: Introduction
confidence: 99%
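The excerpt describes ANNs being used as "universal approximators" for transducer static error correction. The sketch below, under assumed conditions (a made-up monotonic static characteristic, a one-hidden-layer network, plain batch gradient descent), shows one way such a correction could be fitted; it is not the method of any of the cited references.

```python
# Hedged sketch: a small MLP learns to map raw transducer readings back to the
# true quantity, correcting an assumed static nonlinearity. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Simulated static characteristic: true quantity x -> raw reading y (assumed).
x_true = np.linspace(0.0, 1.0, 200)[:, None]
y_raw = x_true + 0.1 * np.sin(2.0 * np.pi * x_true)

# One-hidden-layer MLP that maps the raw reading back to the true quantity.
H = 16
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.05

for epoch in range(5000):
    h = np.tanh(y_raw @ W1 + b1)          # forward pass
    x_hat = h @ W2 + b2
    err = x_hat - x_true                  # residual correction error
    # Backpropagation for the mean-squared error.
    dW2 = h.T @ err / len(y_raw)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    dW1 = y_raw.T @ dh / len(y_raw)
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("max correction error:", np.abs(x_hat - x_true).max())
```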
“…[3][4][5] Neural networks have also been widely applied to model other strongly nonlinear, complex industrial processes containing a large number of process variables. Applications range from modeling effluent concentration of wastewater treatment plants 6 and automotive exhaust catalyst operation 7 to modeling nitro-cellulose production 6 and polymer production 8 in chemical plants. Neural networks are prime candidates for modeling complex dynamical processes due to their ability to approximate large classes of nonlinear functions with sufficient accuracy.…”
Section: Introduction
confidence: 99%