2023
DOI: 10.3390/make5030043

Classification Confidence in Exploratory Learning: A User’s Guide

Abstract: This paper investigates the post-hoc calibration of confidence for “exploratory” machine learning classification problems. The difficulty in these problems stems from the continuing desire to push the boundaries of which categories have enough examples to generalize from when curating datasets, and confusion regarding the validity of those categories. We argue that for such problems the “one-versus-all” approach (top-label calibration) must be used rather than the “calibrate-the-full-response-matrix” approach …
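
To make the distinction the abstract draws more concrete, the sketch below calibrates only the top-label confidence (the one-versus-all view) instead of the full response matrix. It is a minimal sketch under stated assumptions, not the paper's method: the names probs_cal, y_cal, and probs_test, and the choice of isotonic regression as the calibration map, are illustrative placeholders assuming softmax outputs from an already-trained classifier and a held-out calibration split.

# Minimal sketch of top-label (one-versus-all) calibration.
# Assumed inputs: probs_cal / probs_test are softmax outputs of shape
# (n_samples, n_classes); y_cal holds integer labels for a held-out split.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def fit_top_label_calibrator(probs_cal, y_cal):
    """Learn a monotone map from raw top-label confidence to empirical accuracy."""
    top_conf = probs_cal.max(axis=1)                    # confidence of the predicted class only
    correct = (probs_cal.argmax(axis=1) == y_cal).astype(float)  # 1.0 if the top label was right
    iso = IsotonicRegression(out_of_bounds="clip")
    iso.fit(top_conf, correct)                          # binary "was the top label correct?" target
    return iso

def calibrated_top_confidence(iso, probs):
    """Calibrated confidence for the predicted label; the rest of the matrix is untouched."""
    return iso.predict(probs.max(axis=1))

# Illustrative usage:
# iso = fit_top_label_calibrator(probs_cal, y_cal)
# conf_test = calibrated_top_confidence(iso, probs_test)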

Cited by 0 publications
References 26 publications