Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017
DOI: 10.18653/v1/p17-2008
Incorporating Uncertainty into Deep Learning for Spoken Language Assessment

Abstract: There is a growing demand for automatic assessment of spoken English proficiency.

Cited by 61 publications (126 citation statements). References 6 publications.
“…In this work all graders were constructed to predict scores for each of the five sections, then the scores averaged to yield the final score. Two feature-based graders were built; one GP-based [35] (GPtxt) and the other DNN-based [36] (DNNtxt). The features for these systems were the text features described in [35].…”
Section: Methods
confidence: 99%
“…For the neural assessment system (Neurtxt) 5 , BERT was used to extract the word embeddings, followed by a multi-head self-attention mechanism [37]. The output of this process was then fed to the same DNN configuration as [36]. For the neural systems an ensemble of 10 models was built and the predictions averaged to yield the final score.…”
Section: Methods
confidence: 99%
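The ensembling described in this excerpt can be sketched in a few lines. This is a hypothetical illustration, not the cited system: the member scores below are invented, and the real graders are neural models rather than fixed numbers.

```python
from statistics import mean

def ensemble_score(member_scores):
    """Average the per-member score predictions into one final score,
    as in the excerpt's ensemble of 10 models."""
    return mean(member_scores)

# Invented predictions from 10 hypothetical ensemble members.
member_scores = [3.9, 4.1, 4.0, 4.2, 3.8, 4.0, 4.1, 3.9, 4.0, 4.0]
final = ensemble_score(member_scores)  # final score is the mean
```

Averaging the members' outputs smooths individual-model noise, which is why the cited work reports the ensemble mean as the final score.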
“…The models are not appropriate to describe the data. These failures can be termed uncertainty, and Malinin [27] has categorized them into two major groups:…”
Section: Uncertainty Quantification In Deep Learning
confidence: 99%
“…6(c)). The epistemic uncertainty is caused by an insufficient amount of data, and thus it appears far from the data distribution and can detect out-of-distribution samples that are potentially mis-classified or have excessive errors [65,66,75]. With significantly more data samples, the decision of the DNN becomes stable, and the epistemic uncertainty decreases.…”
Section: Uncertainty Quantification
confidence: 99%
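The behavior this excerpt describes can be illustrated with a common proxy for epistemic uncertainty: disagreement (variance) across an ensemble's predictions. This is a minimal sketch under assumed numbers, not the cited papers' exact method — members agree near the training data and disagree far from it, so the variance flags out-of-distribution inputs.

```python
from statistics import pvariance

def epistemic_uncertainty(member_preds):
    """Population variance across ensemble members' predictions for one
    input, used here as a proxy for epistemic uncertainty."""
    return pvariance(member_preds)

# Invented predictions from 4 hypothetical ensemble members.
in_dist = [4.0, 4.05, 3.95, 4.0]  # members agree near the training data
out_dist = [1.0, 5.5, 3.0, 6.0]   # members disagree far from it

low = epistemic_uncertainty(in_dist)
high = epistemic_uncertainty(out_dist)  # high > low flags the OOD input
```

With more training data covering a region, the members converge there and the variance shrinks, matching the excerpt's observation that epistemic uncertainty decreases as data increases.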