2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541)
DOI: 10.1109/ijcnn.2004.1380195

Batch learning competitive associative net and its application to time series prediction

Cited by 11 publications (7 citation statements)
References 9 publications
“…Here, β is the undetermined multiplier for the condition (10) in the Lagrange method. Moreover, β derived from (9)-(11) does not depend on n, while β = β_n derived from (6) increases exponentially with n.…”
Section: A Bayesian Bagging Prediction (mentioning)
Confidence: 95%
“…Note that we have developed an efficient batch learning method (see [11] for details), which we have used in this application.…”
Section: Appendix (mentioning)
Confidence: 99%
“…Note that we have developed an efficient batch learning method (see [6] for details), and we use it in the present competition. The method consists of iterations of (1) competitive learning based on a gradient method, (2) associative learning employing recursive least squares, and (3) reinitialization of units based on an "asymptotic optimality" criterion (see [4]) for overcoming local minima problems of the gradient method.…”
Section: CAN2 and the Bagging, 1) Assumptions on the Given Dataset (mentioning)
Confidence: 99%
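
The three numbered steps quoted above outline the structure of the CAN2 batch learning loop. As a rough illustration only, here is a minimal Python sketch of that loop: every name is hypothetical, the competitive step uses a simple batch gradient move toward each unit's assigned samples, the associative step uses ordinary batch least squares in place of the recursive least squares of the cited method, and the reinitialization is a crude stand-in for the "asymptotic optimality" criterion of [4].

```python
import numpy as np

def can2_batch_learning(X, y, n_units=20, n_iterations=50, seed=0):
    """Sketch of the three-step batch learning loop quoted above.

    Each unit i keeps a center vector w[i] (competitive part) and a local
    linear map M[i] (associative part) predicting y ~ M[i] @ [x, 1].
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = X[rng.choice(n, n_units, replace=False)].copy()  # unit centers
    M = np.zeros((n_units, d + 1))                       # local linear maps
    X_aug = np.hstack([X, np.ones((n, 1))])              # inputs with bias term

    for _ in range(n_iterations):
        # (1) Competitive learning: assign each sample to its nearest unit,
        # then take a gradient-style step moving each center toward the
        # mean of its assigned samples.
        dists = ((X[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
        winners = dists.argmin(axis=1)
        for i in range(n_units):
            members = X[winners == i]
            if len(members) > 0:
                w[i] += 0.1 * (members.mean(axis=0) - w[i])

        # (2) Associative learning: fit each unit's linear map to its
        # assigned samples (batch least squares here; the cited method
        # uses recursive least squares).
        for i in range(n_units):
            idx = winners == i
            if idx.sum() > d:
                M[i], *_ = np.linalg.lstsq(X_aug[idx], y[idx], rcond=None)

        # (3) Reinitialization: relocate units that win no samples to a
        # random training point, a crude stand-in for the "asymptotic
        # optimality" criterion of [4] used to escape local minima.
        for i in range(n_units):
            if not (winners == i).any():
                w[i] = X[rng.integers(n)].copy()

    return w, M
```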
“…The method consists of iterations of (1) competitive learning based on a gradient method, (2) associative learning employing recursive least squares, and (3) reinitialization of units based on an "asymptotic optimality" criterion (see [4]) for overcoming local minima problems of the gradient method. We have used the same parameter values as for the function approximation problems shown in [6], except for the number of units in the CAN2, which is tuned so that the prediction achieves a smaller SMAPE over the validation periods (see Section II-B and Section III).…”
Section: CAN2 and the Bagging, 1) Assumptions on the Given Dataset (mentioning)
Confidence: 99%
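
SMAPE in the quotation above is presumably the symmetric mean absolute percentage error used by time-series forecasting competitions. A minimal Python sketch of the common definition follows; the exact variant used in the competition the quotation refers to may differ in details such as scaling or zero handling.

```python
import numpy as np

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent.

    Assumes the two series are not both zero at any time step.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    denom = (np.abs(actual) + np.abs(forecast)) / 2.0
    return 100.0 * np.mean(np.abs(forecast - actual) / denom)

# Example: smape([100, 200], [110, 190]) is roughly 7.33 (percent).
```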
“…In order to optimize w^l_i and M^l_i for i ∈ I^l, we have developed an efficient batch learning method (see [7], [8] for details), which we have used at the Competition. To keep the focus on the new method for the Competition, we omit the description of the learning method here.…”
Section: A Prediction by the CAN2 and CAN2 Ensemble (mentioning)
Confidence: 99%
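
The quoted passage refers to unit weight vectors w^l_i and associative matrices M^l_i, where the superscript l presumably indexes the l-th CAN2 in the bagging ensemble. Consistent with the piecewise-linear prediction scheme described in these citation statements, here is a minimal Python sketch of how a trained net and an ensemble might produce a forecast; the function names and the simple averaging rule are assumptions, not the authors' exact procedure.

```python
import numpy as np

def can2_predict(x, w, M):
    """Single-net CAN2 prediction (sketch): the unit whose center w[i] is
    nearest to x wins the competition, and its linear map M[i] is applied
    to the bias-augmented input [x, 1]."""
    x = np.asarray(x, dtype=float)
    winner = ((w - x) ** 2).sum(axis=1).argmin()
    return M[winner] @ np.append(x, 1.0)

def ensemble_predict(x, nets):
    """Bagging ensemble (sketch): average the predictions of the
    individual CAN2 nets, each given as a (w, M) pair."""
    return np.mean([can2_predict(x, w, M) for w, M in nets])
```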