Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94)
DOI: 10.1109/icnn.1994.374511
Optimal linear combinations of neural networks: an overview

Cited by 52 publications (20 citation statements). References 8 publications.
“…Many researchers in other fields have applied classifier ensemble methods, which can generate predictions that are theoretically [46] and empirically [7,39] better than those of individual classifiers. However, all existing prediction models of software project risk are based on individual classifiers [1,48,61], and sufficient ensemble models have not been developed.…”
Section: Accepted Manuscript
confidence: 99%
“…A variety of schemes have been proposed for combining multiple classifiers, such as the majority vote [15], the weighted average [16], the Bayes approach [17], the Dempster-Shafer theory [18] and combination by neural networks [19]. A major issue in the proposed system is how to choose appropriate ensemble of classifiers to be integrated in the multiple classifier system.…”
Section: Algorithm Of Combination
confidence: 99%
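The excerpt above lists several combination schemes. Two of the simplest, majority vote over predicted class labels and a weighted average over continuous outputs, can be sketched as follows (the function names and toy data are illustrative, not taken from the cited papers):

```python
import numpy as np

def majority_vote(predictions):
    """Combine class-label predictions (one row per classifier) by majority vote."""
    predictions = np.asarray(predictions)
    combined = []
    for col in predictions.T:  # one column per sample
        values, counts = np.unique(col, return_counts=True)
        combined.append(values[np.argmax(counts)])
    return np.array(combined)

def weighted_average(outputs, weights):
    """Combine continuous classifier outputs by a weighted average."""
    outputs = np.asarray(outputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize so the weights sum to 1
    return np.tensordot(weights, outputs, axes=1)

# Three classifiers predicting binary labels for four samples
preds = [[1, 0, 1, 1],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
print(majority_vote(preds))  # -> [1 0 1 1]

# Continuous scores for two classes, combined with unequal weights
scores = [[0.9, 0.2], [0.6, 0.4], [0.3, 0.8]]
print(weighted_average(scores, [0.5, 0.3, 0.2]))  # -> [0.69 0.38]
```

Majority vote needs only hard labels, while the weighted average (a fixed-weight special case of the linear combinations the paper studies) requires comparable continuous outputs from each classifier.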
“…However, the network may not be the optimum when it is applied to the whole population. Hashem et al [7] proposed using optimal linear combinations of a number of trained neural networks instead of using a single best network. Each component network can have a different architecture and/or training parameters.…”
Section: Multiple Neural Network Approaches
confidence: 99%
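A minimal sketch of the optimal-linear-combination idea described above: given the outputs of several trained networks on held-out data, fit unconstrained combination weights by least squares against the target. Here the "trained networks" are noisy stand-ins for a common target function; all names and data are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for three trained networks: each approximates the same target
# function on a validation set, with different noise levels.
x = np.linspace(0.0, 1.0, 50)
target = np.sin(2 * np.pi * x)
net_outputs = np.stack(
    [target + rng.normal(0.0, s, x.size) for s in (0.3, 0.2, 0.4)], axis=1
)  # shape (50, 3): one column per component network

# MSE-optimal (unconstrained) combination weights: solve
#   min_w || net_outputs @ w - target ||^2
w, *_ = np.linalg.lstsq(net_outputs, target, rcond=None)

combined = net_outputs @ w
mse_single = ((net_outputs - target[:, None]) ** 2).mean(axis=0)
mse_combined = ((combined - target) ** 2).mean()
# On the data the weights were fit to, the linear combination can never be
# worse than the best single network (picking one network is itself a
# feasible weight vector).
print(mse_combined <= mse_single.min())  # -> True
```

The guarantee printed at the end holds only on the fitting data; generalization of the combination weights is one of the issues the overview discusses.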