DOI: 10.14257/ijsip.2016.9.5.02
Clustering Algorithms in Echo State Networks

Abstract: In this work, we develop a new method of setting the input-to-reservoir and reservoir-to-reservoir weights in echo state networks. We use a clustering technique, which we have previously developed, as a pre-processing stage to set the reservoir parameters, which at this stage are prototypes. We then use these prototypes as weights in the standard architecture, while setting the reservoir-to-output weights in the standard manner. We show results on a variety of data sets from the literature which show that this method …
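The pipeline the abstract describes (cluster the data, reuse the prototypes as fixed weights, train only the readout) can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the paper's implementation: it substitutes scikit-learn's standard K-means for the authors' inverse weighted K-means (IWK), applies clustering only on the input-to-reservoir side, and uses placeholder data, dimensions, and hyperparameters throughout.

```python
# Hedged sketch of clustering-based ESN weight initialization.
# Assumptions: standard K-means stands in for the paper's inverse
# weighted K-means (IWK); all data, sizes, and hyperparameters below
# are illustrative placeholders, not values from the paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

n_inputs, n_reservoir = 1, 100                 # assumed dimensions
u = rng.standard_normal((1000, n_inputs))      # placeholder input sequence
y = np.sin(np.cumsum(u, axis=0))               # placeholder target sequence

# 1) Cluster the input samples; use the prototypes (cluster centres)
#    as input-to-reservoir weights instead of purely random draws.
km = KMeans(n_clusters=n_reservoir, n_init=10, random_state=0).fit(u)
W_in = km.cluster_centers_                     # shape (n_reservoir, n_inputs)

# 2) Reservoir-to-reservoir weights: random matrix rescaled so its
#    spectral radius is below 1 (the usual echo state property heuristic).
W = rng.standard_normal((n_reservoir, n_reservoir))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()

# 3) Drive the reservoir and collect its states.
x = np.zeros(n_reservoir)
states = np.empty((len(u), n_reservoir))
for t, u_t in enumerate(u):
    x = np.tanh(W_in @ u_t + W @ x)
    states[t] = x

# 4) Reservoir-to-output weights trained "in a standard manner":
#    ridge regression from collected states to targets.
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_reservoir),
                        states.T @ y)
y_hat = states @ W_out                         # readout predictions
```

Only step 1 departs from a vanilla ESN; steps 2 through 4 follow the standard recipe the abstract refers to, with the closed-form ridge readout being the usual choice for training the reservoir-to-output weights.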

Cited by 2 publications (2 citation statements)
References 22 publications
“…Yet another family of unsupervised learning algorithms are clustering techniques. In [24], the inverse weighted K-means (IWK) algorithm [25], [26] was proposed to initialize the weight matrices of an ESN. After randomly initializing the input weights, they applied IWK to the neuron inputs and adapted the input weights.…”
mentioning
confidence: 99%
“…Yet another family of unsupervised learning algorithms are clustering techniques. In [18], the Inverse Weighted K-Means (IWK) algorithm [19], [20] was proposed to initialize the weight matrices of an ESN. After randomly initializing the input weights, they applied IWK to the neuron inputs and adapted the input weights.…”
Section: Introduction
mentioning
confidence: 99%