2012
DOI: 10.1007/s10489-012-0388-2
On the effect of calibration in classifier combination

Cited by 45 publications (43 citation statements)
References 33 publications
“…We hope that the proposed decompositions provide deeper insight into the causes behind losses and facilitate development of better classification methods, as knowledge about calibration loss has already delivered several calibration methods, see e.g. [2]. Proof of Theorem 3: In Section 4 we proved that both methods provide adjusted scores, so we only need to prove Eq.(1).…”
Section: Discussion
confidence: 92%
“…For log-loss the decomposition for the two models is 0.717 = 0.090 + 0.628 and 0.684 = 0.056 + 0.628, respectively. In practice, calibration has proved to be an efficient way of decreasing proper scoring rule loss [2]. Calibrating a model means learning a calibration mapping from the model output scores to the respective calibrated probability scores.…”
Section: Calibrated Scores C and the Decomposition L = CL + RL
confidence: 99%
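The excerpt above describes calibration as learning a mapping from a model's output scores to calibrated probability estimates, and notes that calibration reduces proper scoring rule loss. A minimal sketch of that idea, assuming isotonic regression as the calibration map; the dataset, model, and variable names are illustrative, not taken from the citing paper:

```python
# Minimal sketch: learn a calibration map (isotonic regression) from raw
# model scores to probabilities on a held-out split, then check that it
# lowers log-loss, a proper scoring rule, on test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
s_cal = model.predict_proba(X_cal)[:, 1]           # raw scores, calibration split
s_te = np.clip(model.predict_proba(X_te)[:, 1], 1e-6, 1 - 1e-6)

# The calibration map: a monotone function from scores to probability estimates.
iso = IsotonicRegression(out_of_bounds="clip").fit(s_cal, y_cal)
p_te = np.clip(iso.predict(s_te), 1e-6, 1 - 1e-6)  # avoid log(0) in log-loss

print("log-loss, raw scores:       ", log_loss(y_te, s_te))
print("log-loss, calibrated scores:", log_loss(y_te, p_te))
```

The calibrated loss is typically lower because the map corrects systematic over- or under-confidence in the raw scores, which is exactly the calibration-loss component of the decomposition quoted above.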
“…The effect of calibration in classifier combination is studied by [2]. Perhaps closest in spirit to our work in this paper is the work by [6] who propose methods to identify and remove unreliable classifiers in a one-vs-one setting.…”
Section: Related Work
confidence: 99%
“…Any calibration procedure which transforms values of g(x) with a calibration map can decrease the group-wise calibration loss but not the grouping loss, which is inherent to the model. Grouping loss arises from the model's decision to group certain instances together with the same probability estimate, whereas the true probabilities are different.…”
Section: Calibration and Reliability
confidence: 99%
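The excerpt above argues that a calibration map cannot reduce grouping loss, because instances sharing a score receive the same calibrated value even when their true probabilities differ. A hypothetical numeric illustration of that gap, with invented subgroup probabilities (0.2 and 0.8) that are not from the citing paper:

```python
# Hypothetical illustration (numbers invented, not from the citing paper):
# the model gives the same score to two subgroups whose true positive
# rates differ (0.2 vs 0.8). Any calibration map sends that shared score
# to a single value, so at best it predicts the pooled rate; the remaining
# gap to the loss of the true probabilities is the grouping loss.
import numpy as np

true_p = np.array([0.2] * 500 + [0.8] * 500)  # true probability per instance
rng = np.random.default_rng(0)
y = rng.binomial(1, true_p)                   # sampled labels

def log_loss(p, y):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

pooled = y.mean()  # best calibrated value for the single shared score
print("loss with calibrated shared score:", log_loss(np.full_like(true_p, pooled), y))
print("loss with true probabilities:     ", log_loss(true_p, y))
# The difference between the two losses is the grouping loss, which no
# calibration map applied to the shared score can remove.
```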