Proceedings of International Conference on Neural Networks (ICNN'96)
DOI: 10.1109/icnn.1996.549082
Modular neural network architectures for classification

Cited by 21 publications (18 citation statements)
References 8 publications
“…Figure 4 shows the pattern distribution of the training set and the real boundary between the two classes. In back-propagation NNs, the training objective is to minimize the error E (with early-stopping criteria [19][20][21] used to prevent overtraining), rather than to find (or approach) the real border between the two classes. Here…”
Section: Explaining Why RPT-HICL Is Better Than Original HICL
Mentioning, confidence: 99%
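The statement above describes back-propagation as minimizing an error E while an early-stopping criterion prevents overtraining. As an illustration only (not code from the cited papers; the function name, network size, and hyperparameters are assumptions chosen for this sketch), a minimal numpy back-propagation loop whose stopping rule monitors a held-out validation error might look like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_early_stopping(X_tr, y_tr, X_val, y_val,
                              hidden=8, lr=0.5, patience=20, max_epochs=5000):
    """Single-hidden-layer back-propagation on the squared error
    E = 0.5 * sum((output - target)^2); training halts once the validation
    error has not improved for `patience` consecutive epochs."""
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.5, size=(X_tr.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    n = len(X_tr)

    best_val, best_weights, wait = np.inf, (W1.copy(), W2.copy()), 0
    for epoch in range(max_epochs):
        # forward pass
        H = sigmoid(X_tr @ W1)
        out = sigmoid(H @ W2)
        # back-propagate the gradient of E (averaged over the batch)
        d_out = (out - y_tr) * out * (1 - out)
        d_H = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * (H.T @ d_out) / n
        W1 -= lr * (X_tr.T @ d_H) / n
        # early-stopping criterion: monitor E on the validation set
        val_out = sigmoid(sigmoid(X_val @ W1) @ W2)
        E_val = 0.5 * np.sum((val_out - y_val) ** 2)
        if E_val < best_val:
            best_val, best_weights, wait = E_val, (W1.copy(), W2.copy()), 0
        else:
            wait += 1
            if wait >= patience:  # no recent improvement: stop before overtraining
                break
    return best_weights
```

Note that the loop keeps the weights from the best validation epoch, so the returned network corresponds to the stopping point rather than to the final (possibly overtrained) epoch.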
“…NN classifiers have some merits over traditional pattern recognition systems with respect to their capability for adaptive learning, their generalization ability with noisy or sparse learning data, and their feasibility for hardware implementation [1]. Current NN classifiers suffer from the major drawback of high internal interference because of the strong coupling among their hidden-layer weights [2].…”
Section: Introduction
Mentioning, confidence: 99%
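The interference described here arises when all classes share one hidden layer. As a hypothetical sketch only (a generic class-wise modular decomposition with independently trained one-vs-rest experts, not necessarily the architecture proposed in the cited paper; the class name and parameters are assumptions), each module below keeps its own hidden-layer weights, so training one class does not perturb the others:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ClasswiseModularNet:
    """One small expert network per class, trained independently on a
    one-vs-rest target, so hidden-layer weights are never shared
    (coupled) across classes."""
    def __init__(self, n_inputs, n_classes, hidden=4, seed=0):
        rng = np.random.default_rng(seed)
        self.modules = [
            (rng.normal(scale=0.5, size=(n_inputs, hidden)),
             rng.normal(scale=0.5, size=(hidden, 1)))
            for _ in range(n_classes)
        ]

    def fit(self, X, labels, lr=0.5, epochs=500):
        for c, (W1, W2) in enumerate(self.modules):
            t = (labels == c).astype(float).reshape(-1, 1)  # one-vs-rest target
            for _ in range(epochs):
                H = sigmoid(X @ W1)
                out = sigmoid(H @ W2)
                d_out = (out - t) * out * (1 - out)          # squared-error gradient
                d_H = (d_out @ W2.T) * H * (1 - H)
                W2 -= lr * (H.T @ d_out) / len(X)            # updates stay inside module c
                W1 -= lr * (X.T @ d_H) / len(X)

    def predict(self, X):
        scores = np.hstack([sigmoid(sigmoid(X @ W1) @ W2)
                            for W1, W2 in self.modules])
        return scores.argmax(axis=1)                         # highest-scoring expert wins
```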
“…In order to avoid large computational cost and overfitting, a method called early stopping, based on a validation set, is used as the stopping criterion. Please refer to [22] for details.…”
Section: A. Electronic Image Files (Optional)
Mentioning, confidence: 99%
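The validation-set-based criterion this statement refers to can be exercised with the earlier sketch; the data, split sizes, and `train_with_early_stopping` helper below are all hypothetical and reuse the definition given after the first citation statement:

```python
import numpy as np

# Hypothetical split: the held-out portion serves as the validation set
# that drives the early-stopping criterion sketched earlier.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = (X[:, :1] + X[:, 1:] > 0).astype(float)   # toy two-class labels
X_tr, X_val, y_tr, y_val = X[:800], X[800:], y[:800], y[800:]
W1, W2 = train_with_early_stopping(X_tr, y_tr, X_val, y_val)
```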
“…In [28,29] … Although constructive learning algorithms can automatically find an optimal combination, they still suffer from drawbacks such as inefficiency in utilizing network resources as the task (and the network) gets larger, and the inability of current learning schemes to cope with high-complexity tasks [14]. Large networks tend to introduce high internal interference because of the strong coupling among their input-to-hidden-layer weights [15].…”
Section: Introduction
Mentioning, confidence: 99%