2018
DOI: 10.1177/1748301818761132

The effectiveness of using diversity to select multiple classifier systems with varying classification thresholds

Abstract: In classification applications, the goal of fusion techniques is to exploit complementary approaches and merge the information these methods provide, yielding a solution superior to any single method. Associated with choosing a methodology to fuse pattern recognition algorithms is the choice of which algorithm or algorithms to fuse. Historically, classifier ensemble accuracy has been used to select which pattern recognition algorithms are included in a multiple classifier system. More recently, research has f…
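The selection problem the abstract describes — picking which classifiers to include based on diversity rather than accuracy alone — can be illustrated with a minimal sketch. A common diversity measure is pairwise disagreement; the function names and the correctness vectors below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: disagreement-based diversity for classifier selection.
# Each classifier is represented by its vector of 0/1 correctness on a
# validation set; the data below is made up for illustration.

from itertools import combinations

def disagreement(a, b):
    """Fraction of validation samples on which two classifiers differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def most_diverse_pair(outputs):
    """Return the pair of classifier indices with maximal disagreement."""
    return max(combinations(range(len(outputs)), 2),
               key=lambda ij: disagreement(outputs[ij[0]], outputs[ij[1]]))

# Correctness vectors for three hypothetical classifiers.
outputs = [
    [1, 1, 0, 1, 0, 1],  # classifier 0
    [1, 1, 0, 1, 0, 0],  # classifier 1 (nearly identical to 0)
    [0, 0, 1, 1, 1, 1],  # classifier 2 (errs on different samples)
]
print(most_diverse_pair(outputs))  # → (1, 2)
```

A diversity-driven selector would favor the pair (1, 2), whose members fail on different samples, over the redundant pair (0, 1), even if the latter are individually more accurate.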

Cited by 9 publications (4 citation statements)
References 40 publications (75 reference statements)
“…Empirically, ensembles tend to yield better results than a single model when there is a significant diversity among the models [17]. For the past few decades, many studies have been focusing on accuracy and diversity of ensemble methods in either regression [18] or classification case [19][20][21]. (Krogh and Vedelsby (1994) [22]) proposed ambiguity decomposition and a computable approach to minimize the quadratic error of the ensemble estimator, while (Ueda and Nakano (1996) [23]) derived a general expression of bias-variance-covariance decomposition.…”
Section: Theoretical Foundation
confidence: 99%
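The two decompositions this statement cites have standard closed forms; the following is the textbook statement with commonly used symbols, not notation taken from the cited papers. For an ensemble $\bar{f} = \sum_i w_i f_i$ with $\sum_i w_i = 1$ and target $d$, the ambiguity decomposition of Krogh and Vedelsby is

```latex
% Ambiguity decomposition: ensemble error = weighted average member error
% minus weighted ambiguity (spread of members around the ensemble).
(\bar{f} - d)^2 \;=\; \sum_i w_i (f_i - d)^2 \;-\; \sum_i w_i (f_i - \bar{f})^2
```

and the bias-variance-covariance decomposition of Ueda and Nakano, for $M$ uniformly weighted estimators, is

```latex
% Expected squared error of the uniform ensemble in terms of the members'
% average bias, variance, and pairwise covariance.
\mathbb{E}\big[(\bar{f} - d)^2\big]
  \;=\; \overline{\mathrm{bias}}^{\,2}
  \;+\; \tfrac{1}{M}\,\overline{\mathrm{var}}
  \;+\; \big(1 - \tfrac{1}{M}\big)\,\overline{\mathrm{covar}}
```

Both make the role of diversity explicit: the ambiguity term and the covariance term are exactly where disagreement among members reduces ensemble error.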
“…Empirically, ensembles tend to yield better results than a single model when there is a significant diversity among the models [18]. For the past few decades, many studies have been focusing on accuracy and diversity of ensemble methods in either regression [6] or classification case [1,11,7]. (Krogh and Vedelsby (1994) [15]) proposed ambiguity decomposition and a computable approach to minimize the quadratic error of the ensemble estimator, while (Ueda and Nakano (1996) [25]) derived a general expression of bias-variance-covariance decomposition.…”
Section: Theoretical Foundation
confidence: 99%
“…It follows that how to stably divide a data set into reasonable partitions without introducing bias is a problem of concern. For DC‐type methods, how to combine results from individual partitions into a useful result is an essential problem, especially when results from individual partitions are inconsistent (Wang, 2008; Butler et al, 2018; Yu, 2018). Milicchio & Gehrke (2007) discussed the pros and cons of different architectures for building and maintaining a large cluster.…”
Section: Introduction
confidence: 99%