2015
DOI: 10.1007/s00180-015-0571-0
A simple method for combining estimates to improve the overall error rates in classification

Cited by 7 publications (11 citation statements)
References 16 publications
“…We will prove that these properties hold, at least asymptotically for some of them, for the possibilistic approach that we introduce in the next section. Observe that (3) asymptotically implies properties (a) to (c), so the added value of our approach as compared to [1] relies on memory complexity and scalability w.r.t. K, as numerical experiments will illustrate in Section 4.…”
Section: Desirable Properties For Classifier Combination
confidence: 99%
“…Although properties (a) to (c) are not as strong as (3), adaSPOCC is a scalable aggregation technique: the number of parameters it requires to learn from the validation set is in O(K), while the number of parameters to learn from D_val in [1] is exponential in K, which is therefore doomed to overfit when K is large.…”
Section: Fully Adaptive Aggregation
confidence: 99%
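The O(K) versus exponential contrast above comes from what each combiner must estimate: a combiner that conditions on the full vector of K individual predictions needs one cell per possible prediction pattern, while a per-classifier scheme keeps one adaptation parameter per classifier. The following minimal Python sketch of that counting argument is illustrative only; the function names are not from the cited papers, and the pattern-table count assumes a plain exact-match combiner over binary classifiers.

```python
# Illustrative parameter-count comparison; the counts below are not taken from
# the cited papers, they only formalize the O(K) vs. exponential contrast.

def pattern_table_size(n_classes: int, n_classifiers: int) -> int:
    """Cells needed by a combiner that conditions on the full prediction
    pattern (f_1(x), ..., f_K(x)): one per possible pattern."""
    return n_classes ** n_classifiers

def per_classifier_size(n_classifiers: int) -> int:
    """Parameters for a scheme that adapts one weight per classifier: O(K)."""
    return n_classifiers

if __name__ == "__main__":
    for K in (2, 5, 10, 20):
        print(f"K={K:2d}  per-classifier={per_classifier_size(K):2d}  "
              f"pattern table={pattern_table_size(2, K)}")
    # With K=20 binary classifiers the pattern table already has
    # 2**20 = 1,048,576 cells to estimate from the validation set.
```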
“…In the second step, for each divergence, a very simple predictive model is fit per cluster. The final step provides an adaptive global predictive model by aggregating, thanks to a consensus idea introduced by Mojirsheibani (1999), several models built for the different instances, corresponding to the different Bregman divergences (see also Mojirsheibani (2000); Balakrishnan and Mojirsheibani (2015); Biau et al. (2016); Fischer and Mougeot (2019)). We name this procedure the KFC procedure, for K-means/Fit/Consensus.…”
Section: Introduction
confidence: 99%
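As a rough illustration of the three KFC steps described above, here is a schematic Python sketch. It is not the authors' procedure: it uses ordinary Euclidean K-means with random restarts in place of the several Bregman divergences, the per-cluster model is just the cluster's majority label, and the consensus step is a plain majority vote standing in for the Mojirsheibani-style consensus. All function names are illustrative.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import KMeans

def fit_kfc(X, y, n_clusters=3, n_partitions=4, seed=0):
    """Steps 1-2: several K-means partitions of X; within each cluster,
    a very simple predictive model (here, the cluster's majority label)."""
    rng = np.random.RandomState(seed)
    models = []
    for _ in range(n_partitions):
        km = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=rng.randint(10**6)).fit(X)
        majority = {c: Counter(y[km.labels_ == c]).most_common(1)[0][0]
                    for c in range(n_clusters)}
        models.append((km, majority))
    return models

def predict_kfc(models, X):
    """Step 3: consensus over the partition-wise predictions (a plain
    majority vote here, standing in for the Mojirsheibani-style consensus)."""
    votes = np.stack([np.array([majority[c] for c in km.predict(X)])
                      for km, majority in models])
    return np.array([Counter(votes[:, i]).most_common(1)[0][0]
                     for i in range(X.shape[0])])
```

With X, y, and X_test as NumPy arrays, predict_kfc(fit_kfc(X, y), X_test) returns the consensus labels.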
“…Note that more regular versions, based on smoothing kernels, have also been developed (Mojirsheibani (2000)). A numerical comparison study of several combining schemes is available in Mojirsheibani (2002b), and recently a variant of the method has been proposed in Balakrishnan and Mojirsheibani (2015). This strategy has since been adapted to the regression framework by Biau et al. (2016).…”
Section: Introduction
confidence: 99%
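The consensus idea these works build on can be sketched as follows: a new point is classified by a vote among validation examples whose vector of individual classifier predictions matches its own, with the kernel-smoothed variants replacing the exact match by a weighted notion of agreement. The Python sketch below implements only the exact-match variant and is not the estimator of any of the cited papers; the function and fallback names are illustrative.

```python
import numpy as np
from collections import Counter

def combined_predict(classifiers, X_val, y_val, X_new):
    """Exact-match consensus: a query point is classified by a vote among
    validation points that receive the same vector of individual predictions."""
    val_patterns = np.column_stack([clf.predict(X_val) for clf in classifiers])
    new_patterns = np.column_stack([clf.predict(X_new) for clf in classifiers])
    fallback = Counter(y_val).most_common(1)[0][0]   # used when no pattern matches
    preds = []
    for pattern in new_patterns:
        mask = np.all(val_patterns == pattern, axis=1)
        matched = y_val[mask]
        preds.append(Counter(matched).most_common(1)[0][0] if matched.size else fallback)
    return np.array(preds)
```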