IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
DOI: 10.1109/ijcnn.1999.832620

An hybrid architecture for active and incremental learning: the self-organizing perceptron (SOP) network

Cited by 4 publications (6 citation statements). References: 8 publications.
“…Next, two character classes were chosen, namely the 'n' and 'r' classes, because they contained many badly detected characters in this word data set. From totals of 140 'n' and 120 'r' instances respectively, 67 and 65 badly detected characters were extracted in order to retrain the SOP using the incremental learning mode described in [13]. These new characters were randomly presented one by one to the SOP; at the end, the final SOP network contained a total of 216 neurons, that is, 16 new neurons had been added to the original SOP.…”
Section: Experiments and Results
confidence: 99%
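
The retraining scheme described in this excerpt, where new samples are presented one at a time and neurons are added only when the current network handles a sample poorly, can be sketched generically. The sketch below is an illustrative nearest-prototype learner under assumed rules (a misclassification check plus a novelty distance threshold), not the actual SOP update procedure from [13].

```python
import numpy as np

class IncrementalPrototypeNet:
    """Nearest-prototype classifier that grows one neuron at a time,
    loosely mirroring the incremental mode described above.
    Illustrative sketch only, not the SOP algorithm of [13]."""

    def __init__(self, threshold=1.0):
        self.threshold = threshold      # assumed novelty threshold
        self.prototypes = []            # list of (vector, label) pairs

    def predict(self, x):
        if not self.prototypes:
            return None
        dists = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        i = int(np.argmin(dists))
        return self.prototypes[i][1], dists[i]

    def partial_fit(self, x, y):
        """Present one sample; add a new neuron only if the network
        misclassifies it or it falls outside the novelty threshold."""
        out = self.predict(x)
        if out is None or out[0] != y or out[1] > self.threshold:
            self.prototypes.append((np.asarray(x, dtype=float), y))

# Samples are presented one by one, as in the retraining experiment.
net = IncrementalPrototypeNet(threshold=0.5)
rng = np.random.default_rng(0)
for _ in range(100):
    label = rng.integers(2)
    sample = rng.normal(loc=2.0 * label, scale=0.3, size=8)
    net.partial_fit(sample, label)
print("neurons added:", len(net.prototypes))
```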
“…It is this feature that makes it possible to train an SOP network in an incremental fashion by adding new neurons to the structure. For full details, the reader is referred to Hébert et al. [13].…”
Section: The Self-Organizing Perceptron (SOP)
confidence: 99%
“…In [HPG99] the authors address these disadvantages by using an unsupervised SOM model: the Growing Neural Gas network [Fri95b], which does not require any prior knowledge of the problem, such as the number of clusters. This SOM is also used to specialize the hidden layer of an MLP according to the clustering result. This approach can be considered an extension of [GS98].…”
Section: Distributed Classification System
confidence: 99%
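
The idea in this excerpt, letting an unsupervised clustering result specialize a network's hidden layer, can be sketched in a few lines. Since a full Growing Neural Gas implementation is lengthy, the sketch below substitutes plain k-means for GNG and uses the resulting centres as Gaussian hidden units feeding a least-squares linear read-out; all names and parameters are illustrative assumptions, not details from [HPG99].

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means, standing in for Growing Neural Gas: any
    unsupervised method that yields cluster centres would do here."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def rbf_features(X, centres, width=1.0):
    """Hidden layer specialized by the clustering result: one
    Gaussian unit per cluster centre."""
    d2 = ((X[:, None] - centres) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Toy two-class data; the centres define the hidden layer, and a
# least-squares read-out plays the role of the trained output weights.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

centres = kmeans(X, k=6)
H = np.c_[rbf_features(X, centres), np.ones(len(X))]  # add bias column
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = (H @ w > 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```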
“…So, obtaining good generalisation behaviour with an MLP is not a trivial task when dealing with complex problems, since no reliable and generic rule is currently available for determining a suitable neural network architecture, and finding one can require a long trial-and-error search [2,3,4,6,15]. Moreover, neural networks have many other defects that are well known and documented [2,3,4,6,10,15,16]. In particular, it has been shown in [13] that the MLP tends to draw open separation surfaces in the input data space, and thus cannot reliably reject patterns.…”
Section: Introduction
confidence: 99%
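
The practical consequence of open separation surfaces can be made concrete with a toy comparison. In the sketch below, a sigmoid of a linear score (standing in for an MLP's open surface) grows ever more confident as a probe point moves away from the data, while a Gaussian prototype score (a closed surface) decays with distance and can therefore support a reject threshold. The surfaces and thresholds are illustrative assumptions, not results from [13].

```python
import numpy as np

# Class prototype at the origin; probe points move away from the data.
prototype = np.zeros(2)
w, b = np.array([1.0, 0.5]), 0.0          # assumed linear (open) surface
probes = [np.array([d, d]) for d in (1.0, 5.0, 50.0)]

for x in probes:
    # Open surface: the sigmoid of a linear score saturates toward
    # full confidence the farther a point lies beyond the boundary.
    open_conf = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Closed surface: a Gaussian score around the prototype decays
    # with distance, so far-away points can be rejected by threshold.
    closed_score = np.exp(-np.linalg.norm(x - prototype) ** 2 / 2.0)
    decision = "accept" if closed_score > 1e-3 else "reject"
    print(f"d={np.linalg.norm(x):6.2f}  open={open_conf:.3f}  "
          f"closed={closed_score:.2e} -> {decision}")
```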