2009
DOI: 10.1109/9780470544037

A Field Guide to Dynamical Recurrent Networks

Cited by 82 publications (63 citation statements). References: 0 publications.
“…For this purpose, the parametric bias (PB) vector is learned simultaneously and unsupervised during normal training of the network. The prediction error with respect to the desired output is determined and backpropagated through time using the BPTT algorithm [9]. However, the error is not only used to correct all the synaptic weights present in the Elman-type network.…”
Section: Storage
Confidence: 99%
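The excerpt above describes the core mechanic: the prediction error is backpropagated through time, and the same error signal updates both the synaptic weights and the parametric bias (PB) vector. A minimal NumPy sketch of that idea follows; the layer sizes, learning rates, and toy next-step prediction task are illustrative assumptions, not the cited authors' implementation.

```python
# Minimal sketch (not the cited authors' code): BPTT on an Elman-style
# network whose hidden layer also receives a parametric-bias (PB) vector.
# The backpropagated prediction error updates the weights AND the PB.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_pb = 3, 8, 3, 2          # illustrative sizes

W_in = rng.normal(0, 0.1, (n_hid, n_in))
W_rec = rng.normal(0, 0.1, (n_hid, n_hid))
W_pb = rng.normal(0, 0.1, (n_hid, n_pb))
W_out = rng.normal(0, 0.1, (n_out, n_hid))
pb = np.zeros(n_pb)                            # parametric-bias vector

def bptt_step(xs, ys, lr_w=0.01, lr_pb=0.05):
    """One BPTT pass over a sequence; returns the summed squared error."""
    global pb
    T = len(xs)
    hs = [np.zeros(n_hid)]
    outs, loss = [], 0.0
    for t in range(T):                         # forward pass through time
        h = np.tanh(W_in @ xs[t] + W_rec @ hs[-1] + W_pb @ pb)
        hs.append(h)
        outs.append(W_out @ h)
        loss += 0.5 * np.sum((outs[-1] - ys[t]) ** 2)
    dW_in = np.zeros_like(W_in); dW_rec = np.zeros_like(W_rec)
    dW_pb = np.zeros_like(W_pb); dW_out = np.zeros_like(W_out)
    dpb = np.zeros_like(pb)
    dh_next = np.zeros(n_hid)
    for t in reversed(range(T)):               # backward pass through time
        dy = outs[t] - ys[t]
        dW_out += np.outer(dy, hs[t + 1])
        dh = W_out.T @ dy + dh_next
        dz = dh * (1 - hs[t + 1] ** 2)         # tanh derivative
        dW_in += np.outer(dz, xs[t])
        dW_rec += np.outer(dz, hs[t])
        dW_pb += np.outer(dz, pb)
        dpb += W_pb.T @ dz                     # same error also reaches the PB
        dh_next = W_rec.T @ dz
    for W, dW in ((W_in, dW_in), (W_rec, dW_rec),
                  (W_pb, dW_pb), (W_out, dW_out)):
        W -= lr_w * dW                         # in-place weight updates
    pb -= lr_pb * dpb                          # PB learned from the same error
    return loss

xs = [rng.normal(size=n_in) for _ in range(5)]
ys = xs[1:] + [xs[0]]                          # toy next-step prediction task
for epoch in range(100):
    loss = bptt_step(xs, ys)
print("final loss:", loss)
```

The PB vector is adapted by the same backpropagated error as the weights, only with its own learning rate; this is the sense in which the excerpt calls the PB learning unsupervised.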
“…Mostly, the architecture is utilized to model the mirror neuron system [7,8]. Here we apply the variant proposed by Cuijpers et al. [8] using an Elman-type structure [9] at its core. Furthermore, we modify the training algorithm to include adaptive learning rates for the training of the weights as well as the PB values.…”
Section: Theory
Confidence: 99%
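The excerpt does not spell out the adaptation rule, so the sketch below uses a delta-bar-delta-style sign-agreement heuristic as one plausible per-parameter scheme: a rate grows while successive gradients agree in sign and shrinks when they flip. The growth and decay factors, bounds, and toy quadratic loss are all assumptions.

```python
# A hedged sketch of per-parameter adaptive learning rates, applicable
# to weight tensors and PB vectors alike; the paper's exact scheme may differ.
import numpy as np

def adapt_rates(rates, grad, prev_grad, up=1.05, down=0.5,
                lo=1e-6, hi=1.0):
    """Grow a rate while the gradient sign is stable, shrink it on a flip."""
    same_sign = grad * prev_grad > 0
    rates = np.where(same_sign, rates * up, rates * down)
    return np.clip(rates, lo, hi)

# usage: keep one rate array per parameter tensor
w = np.zeros(4)
rates = np.full_like(w, 0.01)
prev_g = np.zeros_like(w)
for step in range(50):
    g = 2 * (w - 1.0)                  # toy quadratic loss: (w - 1)^2
    rates = adapt_rates(rates, g, prev_g)
    w -= rates * g
    prev_g = g
print(w)                               # approaches the minimum at 1.0
```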
“…[10], and Dobnikar and Šter [16]. Kolen and Kremer [33] used the concept of recurrent neural networks as an advancement over feedforward neural networks. In a closely related description, Hawkins and Boden [25] stated that "as compared to feedforward neural networks recurrent neural networks generally performed better on sequence analysis tasks".…”
Section: Neural Network and Recurrent Neural Network
Confidence: 99%
“…In principle, recurrent neural networks might be employed by converting the tree into a sequence (see, e.g., [25] and [33]).…”
Section: Neural Network
Confidence: 99%
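To make the tree-to-sequence conversion concrete, here is one possible encoding, a bracketed preorder traversal; the cited works do not prescribe this particular scheme, and the Node class below is a hypothetical helper.

```python
# One possible tree-to-sequence linearization so that a sequence model
# (e.g. an RNN) can consume tree-structured input.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def tree_to_sequence(node):
    """Preorder traversal with bracket tokens marking subtree boundaries."""
    seq = [node.label]
    if node.children:
        seq.append("(")
        for child in node.children:
            seq.extend(tree_to_sequence(child))
        seq.append(")")
    return seq

tree = Node("f", [Node("x"), Node("g", [Node("y")])])
print(tree_to_sequence(tree))  # ['f', '(', 'x', 'g', '(', 'y', ')', ')']
```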
“…However, the training of complex sequences with long-term dependencies is difficult [3]; for example, training a multiple timescale recurrent neural network (MTRNN) of reasonable size is computationally intensive, requiring up to one million training epochs for complex sequences [4]. One issue for these networks with a large number of parameters is identifying good learning rates for the weight updates, which depend on the given problem, the specific network parameters, and the shape of the sequences.…”
Section: Introduction
Confidence: 99%
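For context on the MTRNN mentioned in this excerpt, the sketch below shows the leaky-integrator update that gives such networks their multiple timescales: a unit with a large time constant tau changes slowly and can carry information across many steps. The sizes and time constants are illustrative assumptions, and the sketch omits inputs and training entirely.

```python
# A hedged sketch of the multiple-timescale (leaky-integrator) state
# update at the heart of an MTRNN; sizes and taus are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_fast, n_slow = 6, 2
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),    # fast context units
                      np.full(n_slow, 30.0)])  # slow context units
W = rng.normal(0, 0.1, (n, n))

u = np.zeros(n)                                # internal pre-activation state
for t in range(100):
    x = np.tanh(u)
    # leaky-integrator update: large tau -> slowly changing state
    u = (1 - 1 / tau) * u + (1 / tau) * (W @ x)
print(np.tanh(u))
```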