2022
DOI: 10.1109/access.2022.3168734

Benchmarking Scalable Predictive Uncertainty in Text Classification

Abstract: The authors acknowledge the financial support of VLAIO (Flemish Innovation & Entrepreneurship) through the Baekeland PhD mandate (HBC.2019.2604).

Cited by 8 publications (2 citation statements) · References 82 publications
“…Landeghem et al. proposed a combination of deep ensembles and concrete dropout to model predictive uncertainty in natural language processing, specifically in multiclass and multilabel text classification tasks. The proposed method shows superior calibration performance on i.i.d. data, cross-domain classification, and novel class robustness [24]. Rodríguez-Puigvert et al. apply MC-Dropout in all layers of the DCNN-based encoder to produce better uncertainty quantification for robotic perception.…”
Section: Recent BDL Applications (mentioning)
confidence: 99%
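To make the cited approach concrete, the following is a minimal, hypothetical sketch of MC-Dropout-style predictive uncertainty for a text classifier in PyTorch: dropout is kept active at inference, several stochastic forward passes are averaged, and predictive entropy is used as an uncertainty score. The `TextClassifier` architecture, layer sizes, and function names are illustrative assumptions, not the implementation of Van Landeghem et al. (2022) or Rodríguez-Puigvert et al.; the concrete dropout and deep ensemble components discussed in those papers are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextClassifier(nn.Module):
    """Toy bag-of-embeddings text classifier with dropout (illustrative only)."""

    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=5, p_drop=0.3):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, embed_dim)
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        return self.fc(self.drop(self.embed(token_ids, offsets)))


@torch.no_grad()
def mc_dropout_predict(model, token_ids, offsets, n_samples=20):
    """MC-Dropout inference: keep dropout active, average the softmax outputs
    of several stochastic forward passes, and return mean probabilities plus
    predictive entropy as a simple total-uncertainty score."""
    model.train()  # keeps dropout active at inference time; no parameters are updated
    probs = torch.stack([
        F.softmax(model(token_ids, offsets), dim=-1)
        for _ in range(n_samples)
    ])  # shape: (n_samples, batch, num_classes)
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


# Hypothetical usage on a two-document batch of token ids.
model = TextClassifier()
token_ids = torch.randint(0, 10000, (12,))  # concatenated tokens of both documents
offsets = torch.tensor([0, 7])              # document boundaries for EmbeddingBag
mean_probs, entropy = mc_dropout_predict(model, token_ids, offsets)
print(mean_probs.shape, entropy)            # torch.Size([2, 5]) and two entropy scores
```

A deep-ensemble variant would repeat `mc_dropout_predict` over several independently trained models and average their `mean_probs` before computing the entropy.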
“…Lastly, due to the way that uncertainty estimates are evaluated, investigating distributional shift in NLP is also of interest, for instance through the work of Arora et al. (2021) and Kamath et al. (2020), who focus on question answering, and Tan et al. (2019) for text classification. The most similar work to ours is the text classification uncertainty benchmark by Van Landeghem et al. (2022); however, they do not consider the impact of data or language, and they test a different selection of models.…”
Section: Introduction (mentioning)
confidence: 99%