2010
DOI: 10.4236/jilsa.2010.24021
An Autonomous Incremental Learning Algorithm for Radial Basis Function Networks

Abstract: In this paper, an incremental learning model called Resource Allocating Network with Long-Term Memory (RAN-LTM) is extended such that learning is conducted with some autonomy for the following functions: 1) data collection for initial learning, 2) data normalization, 3) addition of radial basis functions (RBFs), and 4) determination of RBF centers and widths. The proposed learning algorithm, called the Autonomous Learning algorithm for Resource Allocating Network (AL-RAN), is divided into two learning phases…
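The abstract builds on the Resource Allocating Network family, in which an RBF unit is allocated whenever an input is both novel (far from all existing centers) and poorly predicted; otherwise the existing weights are tuned by gradient descent. The sketch below illustrates that allocation rule in the style of Platt's original RAN; the class name and the thresholds `eps`, `delta`, and the width factor `kappa` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class SimpleRAN:
    """Minimal Resource Allocating Network sketch (Platt-style).

    `eps` (novelty distance), `delta` (error threshold), and `kappa`
    (width factor) are illustrative parameters, not from the paper.
    """

    def __init__(self, eps=1.0, delta=0.1, kappa=0.5):
        self.centers, self.widths, self.weights = [], [], []
        self.eps, self.delta, self.kappa = eps, delta, kappa
        self.bias = 0.0

    def predict(self, x):
        # Network output: bias plus weighted sum of Gaussian RBF activations.
        y = self.bias
        for c, s, w in zip(self.centers, self.widths, self.weights):
            y += w * np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
        return y

    def partial_fit(self, x, t, lr=0.05):
        err = t - self.predict(x)
        d = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.delta and d > self.eps:
            # Novel and poorly predicted: allocate a new RBF at the input,
            # with width proportional to the distance to the nearest unit.
            self.centers.append(np.array(x, dtype=float))
            self.widths.append(self.kappa * d if np.isfinite(d) else 1.0)
            self.weights.append(err)
        else:
            # Otherwise adapt the existing weights by gradient descent.
            for i, (c, s) in enumerate(zip(self.centers, self.widths)):
                phi = np.exp(-np.sum((x - c) ** 2) / (2 * s ** 2))
                self.weights[i] += lr * err * phi
            self.bias += lr * err
```

RAN-LTM extends this scheme with memory items stored in a long-term memory that are co-trained with new samples to suppress forgetting; that mechanism is not shown here.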

Cited by 5 publications (3 citation statements) · References 15 publications
“…In conventional RAN, the problem is solved by learning a local region approximated with a C1-continuous polynomial, without calculating the RBF outputs (Platt 1991). The same goal is achieved by another RAN variant, RAN with Long-Term Memory (RAN-LTM), which carries out learning with memory items stored in the LTM (Kobayashi et al. 2001, Okamoto et al. 2003, Ozawa et al. 2010). Nevertheless, we use a different approach, easily implemented using LSH, which allows us to select the active RBF units.…”
Section: Loop
confidence: 99%
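The citing work selects "active" RBF units via LSH so that only units near the input need to be evaluated. A toy sketch of that idea follows, using a simple grid hash as a stand-in for a real LSH family; the class and the `cell` parameter are illustrative assumptions, not details of the cited method.

```python
import numpy as np
from collections import defaultdict

def grid_hash(x, cell=0.5):
    """Map a vector to a grid-cell id (a simple stand-in for LSH)."""
    return tuple(np.floor(np.asarray(x, dtype=float) / cell).astype(int))

class ActiveRBFSelector:
    """Bucket RBF centers so only nearby ('active') units are evaluated.

    `cell` controls bucket granularity and is an illustrative parameter.
    """

    def __init__(self, cell=0.5):
        self.cell = cell
        self.buckets = defaultdict(list)  # cell id -> list of unit indices

    def add(self, idx, center):
        self.buckets[grid_hash(center, self.cell)].append(idx)

    def active_units(self, x):
        # Probe the query's cell and all immediately adjacent cells,
        # so units just across a cell boundary are still found.
        base = np.asarray(grid_hash(x, self.cell))
        hits = []
        for offset in np.ndindex(*(3,) * len(base)):
            key = tuple(base + np.array(offset) - 1)
            hits.extend(self.buckets.get(key, []))
        return hits
```

During prediction or incremental update, only the unit indices returned by `active_units` would have their Gaussian activations computed, keeping per-sample cost roughly constant as units accumulate.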
“…It needs to keep the formed clusters intact while incrementally accommodating the new data. Formation of the clusters with respect to the underlying pattern [11] of the data is now considered during the process of learning incrementally [12].…”
Section: Related Work
confidence: 99%
“…Figure 3 illustrates how labeled spam emails are trained under (a) the batch learning scheme and (b) the incremental learning scheme. In the batch learning scheme, we adopt the conventional RBF network (RBFN) as a classifier (i.e., RBFN is usually used for batch learning [25]), and a sliding window is introduced to define the data set to be trained each day. In this experiment, the time-window size is preliminarily determined as 12 days via cross-validation using the spam emails collected during a different period.…”
Section: Methods
confidence: 99%
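The batch scheme described above retrains the classifier each day on the most recent window of labeled data. A minimal sketch of that windowing, assuming the data arrives as per-day lists of samples (the function name and structure are illustrative, not from the cited experiment):

```python
from collections import deque

def sliding_window_batches(daily_data, window_size=12):
    """Yield, for each day, the batch a classifier would be retrained on.

    Under the sliding-window scheme, training each day uses only the most
    recent `window_size` days of labeled data (12 days in the cited
    experiment). `daily_data[d]` holds the samples labeled on day d.
    """
    window = deque(maxlen=window_size)  # old days drop out automatically
    for day, samples in enumerate(daily_data):
        window.append(samples)
        # Flatten the retained days into one batch for conventional
        # (batch) training of the RBFN classifier.
        batch = [s for day_samples in window for s in day_samples]
        yield day, batch
```

The contrast with the incremental scheme is that here the model is rebuilt from the batch every day, whereas an incremental learner such as RAN-LTM updates in place and needs no window at all.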