1996
DOI: 10.1109/3477.484446
Enhancing MLP networks using a distributed data representation

Abstract: Multilayer perceptron (MLP) networks trained using backpropagation can be slow to converge in many instances. The primary reason for slow learning is the global nature of backpropagation. Another reason is the fact that a neuron in an MLP network functions as a hyperplane separator and is therefore inefficient when applied to classification problems in which decision boundaries are nonlinear. This paper presents a data representational approach that addresses these problems while operating within the framework…
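The "distributed data representation" of the abstract is a form of ensemble (coarse) encoding: each scalar attribute is expanded into the activations of several overlapping receptors before it reaches the MLP. The following is a minimal sketch of the idea in Python; the Gaussian receptive-field shape, the `ensemble_encode` name, and the width heuristic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ensemble_encode(x, x_min, x_max, r=3, width=None):
    """Expand one scalar attribute into r overlapping receptor activations."""
    centers = np.linspace(x_min, x_max, r)        # receptor centers across the range
    if width is None:
        width = (x_max - x_min) / (r - 1)         # heuristic: one inter-center gap
    # Gaussian receptive fields: each receptor responds most strongly to
    # values near its own center (the shape is an illustrative assumption).
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# An attribute value of 0.2 on [0, 1] with 3 receptors:
print(np.round(ensemble_encode(0.2, 0.0, 1.0, r=3), 3))  # [0.923 0.835 0.278]
```

The MLP then sees several correlated inputs per attribute instead of one raw value, so a decision boundary that is nonlinear in the raw space can become easier to separate, at the cost of a wider input layer.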

Cited by 13 publications (6 citation statements)
References 17 publications
“…We compared the performance of these four methods, in addition to the previously published symmetrical placement method [27], on three well-known benchmark data sets, with the maximum number of receptors per attribute varying from 3 to 5. To reduce the impact of random noise, we used leave-one-out cross-validation, and repeated all experiments three times.…”
Section: Results (confidence: 99%)
“…Although the symmetrical placement technique was presented in [27] for the case of only three receptors per attribute, it can be generalized to r receptors. For a given attribute, a_1 and a_r are set to the minimum and maximum values, respectively, for that attribute; each a_i, for i = 2, ..., r−1, is assigned according to …”
Section: Symmetrical Placement (confidence: 98%)
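The excerpt truncates before the citing paper's assignment rule for the interior centers, so that formula stays elided here. As a hedged illustration only, the sketch below pins a_1 and a_r to the attribute's range as the quote states and fills the interior by equal spacing, a simple symmetric choice assumed in place of the elided rule; `place_receptors` is a hypothetical helper name.

```python
import numpy as np

def place_receptors(attr_min, attr_max, r):
    """Place receptor centers a_1..a_r for one attribute.

    a_1 and a_r are pinned to the attribute's minimum and maximum, as the
    excerpt states. The interior centers are evenly spaced here only as an
    assumed stand-in for the citing paper's elided formula.
    """
    return np.linspace(attr_min, attr_max, r)

print(place_receptors(0.0, 10.0, 5))  # [ 0.   2.5  5.   7.5 10. ]
```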
“…Thus, a network can adapt to one pattern while changing its response to unlike patterns only slightly. This property can lead to local learning, and previous work [7,8] has shown that ensemble encoding of network inputs can accelerate learning in MLP networks. The intent of the current work is to examine the impact of ensemble encoding on the incremental learning ability of MLP networks.…”
Section: Local Learning (confidence: 95%)
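The locality claim can be made concrete with a small numeric check: under an ensemble encoding, dissimilar inputs activate mostly disjoint receptors, so first-layer weight updates driven by one pattern barely touch the weights that respond to the other. The sketch below, again assuming Gaussian receptive fields (an illustrative choice, not the cited papers' exact scheme), measures that overlap.

```python
import numpy as np

def ensemble_encode(x, centers, width):
    # Gaussian receptive fields, as in the earlier sketch (assumed shape).
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

centers = np.linspace(0.0, 1.0, 5)             # 5 receptors per attribute
a = ensemble_encode(0.1, centers, width=0.15)  # a pattern near the low end
b = ensemble_encode(0.9, centers, width=0.15)  # a dissimilar pattern near the high end

# A first-layer backprop update is proportional to the input activation,
# so the activation overlap a.b bounds how much adapting to one pattern
# disturbs the weights that matter for the other.
print(np.round(a, 3))          # [0.801 0.607 0.029 0.    0.   ]
print(np.round(b, 3))          # [0.    0.    0.029 0.607 0.801]
print(round(float(a @ b), 5))  # small overlap (about 0.0009) -> nearly local updates
```

A raw scalar input, by contrast, feeds every first-layer weight on every pattern, which is exactly the global behavior the abstract identifies as a cause of slow convergence.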