2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation
DOI: 10.1109/uksim.2014.36
Reducing Complexity of Echo State Networks with Sparse Linear Regression Algorithms

Abstract: In this paper the use of sparse linear regression algorithms in echo state networks (ESN) is presented for reducing the number of readouts and improving the robustness and generalization properties of ESNs. Three data sets with 80 tests overall are used to validate the use of sparse linear regression algorithms for echo state networks. It is shown that it is possible to increase accuracy on the test data sets, not used in the ESN training phase, and at the same time reduce the overall number of the required readouts.
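
For concreteness, here is a minimal sketch of the approach the abstract describes: training an ESN readout with an L1-penalized (LASSO) regression so that many readout weights shrink to exactly zero. All sizes and hyperparameters below are illustrative assumptions, not values from the paper, and scikit-learn's Lasso stands in for whichever sparse solver the authors used:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)

    # Reservoir setup (sizes, input scaling, spectral radius are illustrative).
    n_inputs, n_reservoir = 1, 200
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius 0.9

    def collect_states(u):
        """Drive the reservoir with input sequence u and record its states."""
        x = np.zeros(n_reservoir)
        states = []
        for u_t in u:
            x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(0.2 * np.arange(1000))
    X, y = collect_states(u[:-1]), u[1:]

    # Sparse readout: L1-penalized regression in place of ridge/pseudoinverse;
    # the first 100 washout states are discarded before fitting.
    readout = Lasso(alpha=1e-4, max_iter=50_000).fit(X[100:], y[100:])
    print("nonzero readout weights:", np.count_nonzero(readout.coef_), "of", n_reservoir)

Counting the nonzero coefficients shows how the L1 penalty prunes the readout, which is the complexity reduction the title refers to.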


Cited by 8 publications (6 citation statements) · References 15 publications

Citation statements (ordered by relevance):
“…Since the training of the readout is formulated as a linear problem, most of the relevant literature can also be applied for this class of networks. As an example, ℓ1 minimization to achieve sparse readouts was investigated independently in the context of ESNs by Ceperic and Baric, and by Bianchi et al., with some similar results obtained in a previous study by Butcher et al. Additionally, it is possible to consider advanced optimization strategies, support vector algorithms, and many other variations (see Section 7 of Lukoševičius and Jaeger for an overview).…”
Section: Theoretical Properties of RC Networks (mentioning; confidence: 74%)
“…At the same time, the literature on these topics is vast and fragmented so that equivalent ideas are reintroduced time and again, and it becomes difficult to appreciate the fundamental unity (both theoretical and practical) underlying all these methods. Going back to our previous example, sparse linear regression has been derived independently in all three areas, sometimes more than once in each case…”
Section: Introduction (mentioning; confidence: 99%)
“…where L is the length of the training data, y_i is the training label that will be discussed in the next section, and λ represents the regularisation parameter that determines the strength of the L1 penalty [36,37]. Finally, the output X can be written as:…”
Section: Lasso Regression (mentioning; confidence: 99%)
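
The displayed equation this snippet refers to did not survive extraction. Using the quote's own notation (L training samples, labels y_i, penalty strength λ) and writing the reservoir states as x_i (an assumption), the standard LASSO objective it describes would read:

    \hat{w} = \arg\min_{w} \; \frac{1}{L} \sum_{i=1}^{L} \left( y_i - w^{\top} x_i \right)^{2} + \lambda \, \lVert w \rVert_{1}

This is a reconstruction of the textbook LASSO objective, not necessarily the cited paper's exact equation; the truncated "output X" expression that follows it in the source is left as-is.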
“…The use of the LASSO algorithm, where sparsity is obtained by including an additional L1 regularization term, is derived by Ceperic and Baric [41]. It is also possible to combine ridge regression with the LASSO algorithm, obtaining the so-called elastic net penalty [42].…”
Section: B. Sparse Readouts for ESNs (mentioning; confidence: 99%)
“…It is also possible to combine ridge regression with the LASSO algorithm, obtaining the so-called elastic net penalty [42]. This has been investigated independently by Ceperic and Baric [41], and Bianchi et al. [22].…”
Section: B. Sparse Readouts for ESNs (mentioning; confidence: 99%)
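
To make the elastic-net readout named above concrete, here is a minimal sketch using scikit-learn's ElasticNet, which combines the ridge (L2) and LASSO (L1) penalties. The synthetic data and the alpha and l1_ratio values are illustrative assumptions, not settings from any of the cited papers:

    import numpy as np
    from sklearn.linear_model import ElasticNet

    # Synthetic stand-ins for reservoir states X and training targets y.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 200))
    y = X[:, :5] @ rng.standard_normal(5) + 0.01 * rng.standard_normal(500)

    # alpha scales the total penalty; l1_ratio sets the L1/L2 mix
    # (l1_ratio=1.0 is pure LASSO, l1_ratio=0.0 is pure ridge).
    readout = ElasticNet(alpha=1e-3, l1_ratio=0.5, max_iter=50_000).fit(X, y)
    print("nonzero readout weights:", np.count_nonzero(readout.coef_))

The l1_ratio knob is what lets the elastic net trade off the sparsity of LASSO against the shrinkage stability of ridge regression.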