2018 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt.2018.8639697
A Prompt-Aware Neural Network Approach to Content-Based Scoring of Non-Native Spontaneous Speech

Cited by 15 publications (7 citation statements)
References 21 publications
“…Craighead et al. (2020) explored text-based auxiliary tasks, training models in a multi-task manner on speech transcriptions, and found that the L1 prediction task benefits scoring performance. Recent work has also explored specific aspects of speech scoring such as response content scoring, where features from the response transcription are modeled together with the respective question to learn the relevance of the response (Yoon and Lee 2019; Qian et al. 2018). Qian et al. (2019) build on the work of Qian et al. (2018) and model acoustic cues, prompt, and grammar features to improve scoring performance.…”
Section: Related Work
confidence: 99%
“…Recent work has also explored specific aspects of speech scoring such as response content scoring, where features from the response transcription are modeled together with the respective question to learn the relevance of the response (Yoon and Lee 2019; Qian et al. 2018). Qian et al. (2019) build on the work of Qian et al. (2018) and model acoustic cues, prompt, and grammar features to improve scoring performance. Singla et al. (2021a), in recent work, use speech and text transformers (Shah et al. 2021) to score candidate speech.…”
Section: Related Work
confidence: 99%
“…Linguistic features have also attracted research interest. In [12], a prompt-aware feature was proposed for spontaneous speech evaluation. A context-aware GOP was proposed in [13] to incorporate phone transition and phone duration factors into the calculation of GOP.…”
Section: Introduction
confidence: 99%
“…Long short-term memory (LSTM) recurrent networks were adopted for pronunciation assessment [19,20]. More recently, attention mechanisms have also been applied to speech evaluation [21,15,12]. These studies have shown promising improvements in speech evaluation performance on language-specific tasks.…”
Section: Introduction
confidence: 99%
“…In the automated scoring area, several researchers have explored the use of diverse neural networks for essay scoring (Farag et al., 2018; Alikaniotis et al., 2016; Dong and Zhang, 2016) and spontaneous speech scoring (Chen et al., 2018a; Qian et al., 2018a,b), achieving performance comparable or superior to sophisticated linguistic feature-based systems. In particular, Qian et al. (2018b) trained an automated scoring model covering the content aspect and achieved a further improvement over a generic model without content modeling.…”
Section: Introduction
confidence: 99%