1994
DOI: 10.1016/0893-6080(94)90046-9
Democracy in neural nets: Voting schemes for classification

Cited by 273 publications (118 citation statements)
References 9 publications
“…Note that the combination of different models for improving the overall classification performance has been extensively studied in the neural computation literature; see, for instance, Battiti and Colla (1994) and Husmeier (1999). We here borrow a term frequently used in neural networks research and refer to (29) as a committee of HMMs.…”
Section: A Committee of HMMs
Confidence: 99%
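The committee idea quoted above can be sketched in a few lines (a minimal illustration only; the function name and the member posteriors are hypothetical, not taken from the cited papers): a committee averages the class-posterior estimates of its member models and then picks the class with the highest averaged posterior.

```python
# Minimal committee sketch: average the per-class posterior estimates
# of several (hypothetical) member models, then return the argmax class.
def committee_predict(posteriors_per_model):
    """posteriors_per_model: list of per-model class-probability vectors."""
    n_models = len(posteriors_per_model)
    n_classes = len(posteriors_per_model[0])
    avg = [sum(p[c] for p in posteriors_per_model) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Three member models disagree individually, but the averaged
# posterior (about [0.53, 0.47]) favours class 0.
members = [[0.6, 0.4], [0.3, 0.7], [0.7, 0.3]]
print(committee_predict(members))
```

Averaging posteriors (rather than hard majority voting) is one of several combination rules studied in this literature; both reduce to the same decision when the members agree.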
“…Although it is still possible that the same or very similar rules will be generated between different levels and classes even when using the method described in this section, the probability of this happening (dependent on the make-up of the training set) is smaller when including the examples belonging to sibling classes as negative examples when inducing rules. Recall that to make an effective ensemble it is very important that the component classifiers be diverse, even if at the expense of some accuracy [34], [5], [1].…”
Section: Technical Details of the HEHRS Method
Confidence: 99%
“…Skalak [34] discusses an example of this phenomenon where a classifier that is 69% accurate is combined with classifiers that are 23% accurate and 25% accurate, and this boosts overall accuracy to 88%. The diversity of the component classifiers is very important in ensemble approaches, because component classifiers must make different errors to make the overall ensemble more accurate [5], [1]. There is often a trade-off between accuracy and diversity in classifiers, as it is often easier to make more diverse (uncorrelated) classifiers when the classification accuracies of the individual classifiers are lowered.…”
Section: Hierarchical Ensemble of Hierarchical Rule Sets (HEHRS)
Confidence: 99%
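The diversity argument quoted above can be made concrete in the idealized case of fully uncorrelated errors (a sketch only; the 69%/23%/25% figures from Skalak involve correlated errors and are not reproduced here). With independent errors, the binomial tail gives the accuracy of a majority vote, which exceeds that of any single member.

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Accuracy of a simple majority vote over n classifiers, each of
    which is independently correct with probability p (idealized,
    fully uncorrelated-errors case; n assumed odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# Three independent 70%-accurate voters already beat any single one.
print(majority_vote_accuracy(0.7, 3))  # ~0.784
```

As the quote notes, real component classifiers rarely have independent errors, so this figure is an upper bound that motivates, rather than guarantees, the benefit of diversity.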
“…In this last decade one of the main research areas in machine learning has been represented by methods for constructing ensembles of learning machines. Although in the literature [86,129,130,69,61,23,33,12,7,37] a plethora of terms, such as committee, classifier fusion, combination, aggregation and others, is used to indicate sets of learning machines that work together to solve a machine learning problem, in this paper we shall use the term ensemble in its widest meaning, in order to include the whole range of combining methods. This variety of terms and specifications reflects the absence of a unified theory on ensemble methods and the youth of this research area.…”
Section: Introduction
Confidence: 99%