2014 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS)
DOI: 10.1109/eais.2014.6867468
Evolving neural network with extreme learning for system modeling

Cited by 23 publications (9 citation statements)
References 25 publications
“…36. The difference compared to the previous example is that here the parameters a, b and c are not fixed, but change over time according to the equations in [58], where the detailed description of experiment is also given.…”
Section: E Comparison Of the eFuMo To Other Similar Methods
confidence: 96%
“…The two tables demonstrate two different experimental setups. In Table III the experiment is set up as described in [58]. All the data are used for testing and validation.

  Method          Rules  Error
  [59]            10     0.0129
  RANEKF [60]     11     0.0184
  Simpl_eTS [61]  18     0.0122
  eTS [29]        19     0.0082
  SONFIN [25]     10     0.013
  SAFIN [57]      13     0.007
  eFuMo [48]      12     0.0035
…”
Section: E Comparison Of the eFuMo To Other Similar Methods
confidence: 99%
“…where the index k again denotes the class index, k = 1, ..., C. z_t denotes the regressor vector of the current sample, η is the current Kalman gain vector, and I_{L_t^s} is an identity matrix whose size, L_t^s × L_t^s, is given by the number of neurons in the second layer; ψ ∈ ]0, 1] denotes the forgetting factor (1 by default). Q denotes the inverse Hessian matrix Q = (Z_sel^T Z_sel)^{-1} and is initially set to ωI_{L_t^s}, where ω = 1000 [25]. This matrix is updated directly and incrementally by the second equation above, without requiring (time-consuming and possibly unpredictable) re-inversion of matrices.…”
Section: Training and Model Interpretability
confidence: 99%
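The incremental update of the inverse Hessian described in this citation statement is the standard recursive least squares (RLS) scheme: the gain vector and Q are propagated one sample at a time, so the matrix inverse is never recomputed. The following is a minimal sketch of that idea, not the cited authors' code; the function name `rls_update` and the use of NumPy are illustrative assumptions.

```python
import numpy as np

def rls_update(w, Q, z, y, psi=1.0):
    """One recursive-least-squares step.

    w   : current weight vector, shape (L,)
    Q   : current inverse Hessian (Z^T Z)^{-1}, shape (L, L)
    z   : regressor vector of the current sample (neuron activations), shape (L,)
    y   : target value of the current sample
    psi : forgetting factor in ]0, 1]; 1.0 means no forgetting (the default)
    """
    z = z.reshape(-1, 1)
    denom = psi + float(z.T @ Q @ z)
    eta = (Q @ z) / denom                         # Kalman gain vector
    # Incremental inverse-Hessian update: no matrix re-inversion needed.
    Q = (Q - (Q @ z @ z.T @ Q) / denom) / psi
    # Weight correction driven by the prediction error of the current sample.
    w = w + eta.flatten() * (y - float(z.flatten() @ w))
    return w, Q

# Initialization as in the text: Q = omega * I with a large omega (e.g., 1000).
L = 2
w = np.zeros(L)
Q = 1000.0 * np.eye(L)
```

On noise-free data the weights converge to the batch least-squares solution after a handful of samples; the large initial ω makes the initialization bias negligible.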
“…z_t denotes the regressor (row) vector of the current sample (i.e., the activation levels of all neurons in the current stream sample), J is the current Kalman gain (row) vector, and I_{L_t^s} is an identity matrix whose size, L_t^s × L_t^s, is given by the number of neurons in the second layer; ψ denotes a possible forgetting factor, but is set to 1 by default (no forgetting). Q denotes the inverse Hessian matrix Q = (Z_sel^T Z_sel)^{-1} and is initially set to ωI_{L_t^s}, where ω is a big number (e.g., 1000), as in Rosa et al. [36]; see also Chapter 2 in [37] for a detailed convergence analysis. This matrix is updated directly and incrementally by the second equation above without requiring (time-consuming and possibly unpredictable) re-inversion of matrices.…”
Section: Evolving Fuzzy Neural Network and Our Approach: EFNN-LN
confidence: 99%