2009
DOI: 10.1093/jigpal/jzp043
Structure optimization of reservoir networks

Cited by 17 publications (16 citation statements)
References 24 publications
“…The spectral radius |λ_max| of the weight matrix W plays a crucial role in determining the dynamics that will take place in the recurrent network. Other factors, such as small-world degree, scale-free regimes, and bio-inspired axonal growth patterns, have also been shown to positively influence the capabilities of the reservoir [242]. On the other hand, a recent theoretical analysis by Zhang et al. argues that all random reservoir topologies asymptotically converge to the same distribution of eigenvalues, implying that the topology is relatively indifferent after all [307].…”
Section: Reservoir Computing (mentioning)
confidence: 99%
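In practice, the role of the spectral radius described above is enforced by rescaling the random reservoir matrix so that |λ_max| sits at a chosen value, typically just below 1, a common heuristic for the echo state property. The sketch below illustrates this in NumPy; the function name, reservoir size, and connectivity are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def scale_to_spectral_radius(W, target_rho=0.9):
    """Rescale a reservoir weight matrix so that max |eigenvalue| equals target_rho."""
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target_rho / rho)

# Illustrative usage: a sparse random reservoir of 200 units.
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(200, 200))
W *= rng.random((200, 200)) < 0.1              # keep roughly 10% of the connections
W = scale_to_spectral_radius(W, target_rho=0.9)
```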
“…To this end, there are various methods in the literature (Chatzidimitriou and Mitkas 2013; Roeschies and Igel 2010; Whiteson and Stone 2006). In Whiteson and Stone (2006), NEAT is used to develop ad hoc neural networks without recurrent connections so that learning can be applied using standard error backpropagation updates to further adapt weights toward a solution.…”
Section: Evolutionary Neural Network (mentioning)
confidence: 99%
“…In Whiteson and Stone (2006), NEAT is used to develop ad hoc neural networks without recurrent connections so that learning can be applied using standard error backpropagation updates to further adapt weights toward a solution. In Roeschies and Igel (2010), the authors use their own evolutionary method to develop ESNs with competitive results in time series prediction problems. In Chatzidimitriou and Mitkas (2013), reservoirs start minimally and grow through evolution to solve the problem at hand.…”
Section: Evolutionary Neural Network (mentioning)
confidence: 99%
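The "start minimally and grow through evolution" idea can be pictured as a mutation operator that adds neurons to an existing reservoir while keeping its spectral radius fixed. The sketch below is a hypothetical illustration of such a growth step, not the operator actually used by Chatzidimitriou and Mitkas (2013); the function name, connection density, and spectral radius are assumptions.

```python
import numpy as np

def grow_reservoir(W, n_new=1, density=0.1, spectral_radius=0.9, rng=None):
    """Hypothetical growth mutation: add n_new neurons with sparse random links,
    then rescale so the spectral radius stays at the chosen value."""
    rng = rng if rng is not None else np.random.default_rng()
    n = W.shape[0]
    W_big = np.zeros((n + n_new, n + n_new))
    W_big[:n, :n] = W                                      # keep existing weights
    mask = rng.random((n + n_new, n + n_new)) < density    # candidate new links
    mask[:n, :n] = False                                   # only links touching new neurons
    W_big[mask] = rng.uniform(-1.0, 1.0, size=int(mask.sum()))
    rho = np.max(np.abs(np.linalg.eigvals(W_big)))
    return W_big * (spectral_radius / rho) if rho > 0 else W_big
```

An evolutionary loop would apply such mutations to a population of reservoirs and keep those whose trained readouts perform best on the task at hand.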
“…However, to create networks that adapt autonomously to the problem at hand, without knowing in advance the capacity (in terms of neurons) required to handle the complexity of the problem, we need evolutionary function approximation methods. To this end, there are various methods in the literature (Chatzidimitriou and Mitkas 2013; Roeschies and Igel 2010; Whiteson and Stone 2006). Basically, these methods automatically select function approximator representations that enable efficient individual learning.…”
Section: Evolutionary Neural Network (mentioning)
confidence: 99%