2020
DOI: 10.1016/j.specom.2019.12.002
Automatic assessment of English proficiency for Japanese learners without reference sentences based on deep neural network acoustic models

Cited by 19 publications (15 citation statements)
References 20 publications
“…First and foremost, the findings supported the emerging evidence for the possibility, reliability, and validity of machine assessments of spontaneous speech samples (e.g., r = .799 in Fu et al., 2020; r = .77 in Chen et al., 2018). Furthermore, our study indicated that such automatic scoring can be used to simulate naïve listeners' intuitive judgments of L2 speech comprehensibility, an anchor of communicative success among English speakers in global contexts.…”
Section: Discussion (supporting)
confidence: 72%
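To make the quoted reliability statistics concrete, here is a minimal Python sketch of the Pearson correlation between machine scores and human ratings, the r reported above. The arrays are invented for illustration; they are not data from Fu et al. (2020) or Chen et al. (2018).

```python
import numpy as np

# Hypothetical machine scores and human ratings for the same speech
# samples; values are illustrative only, not from the cited studies.
machine_scores = np.array([3.1, 4.0, 2.5, 3.8, 4.5, 2.9])
human_ratings  = np.array([3.0, 4.2, 2.7, 3.5, 4.6, 3.1])

# Pearson r, the validity statistic quoted above (e.g., r = .799).
r = np.corrcoef(machine_scores, human_ratings)[0, 1]
print(f"Pearson r = {r:.3f}")
```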
“…To further increase the accuracy of ASR, another intriguing idea concerns evaluating the quality of L2 speech using L1 and L2 corpus data. For example, the gap among Japanese speakers of English in Fu et al. (2020) was assessed using the outcomes from ASR trained on L1 Japanese data and those from L2 English data. L2 English speech with a low word-error rate in both the L1 Japanese and the L2 English ASR (i.e., a nil or small gap) could be considered highly proficient.…”
Section: Phonological Measures (mentioning)
confidence: 99%
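As an illustration of this gap idea (a sketch, not the method of Fu et al., 2020, whose system works without reference sentences), the following Python scores one utterance by the word error rates of two hypothetical recognizers, one trained on L1 Japanese data and one on L2 English data, and by the gap between them. The prompt, transcripts, and function names are assumptions made purely for the example.

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

# Hypothetical transcripts of one utterance from the two recognizers;
# a known prompt is assumed here only so WER can be computed.
prompt     = "the quick brown fox jumps over the lazy dog"
hyp_l1_asr = "the quick brown fox jumps over the lazy dog"  # L1-Japanese-trained ASR
hyp_l2_asr = "the quick brown fox jump over a lazy dog"     # L2-English-trained ASR

wer_l1, wer_l2 = wer(prompt, hyp_l1_asr), wer(prompt, hyp_l2_asr)
gap = abs(wer_l1 - wer_l2)
# Low WER under both recognizers and a near-zero gap -> treat as highly proficient.
print(f"WER(L1 ASR)={wer_l1:.2f}  WER(L2 ASR)={wer_l2:.2f}  gap={gap:.2f}")
```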
“…Computer-generated feedback (CGF) employing ASR gives supplemental information in addition to CMF (Sherafati et al., 2020). For example, Fu et al. (2020) developed an ASR-based system that uses deep learning to estimate the similarity of English pronunciation between Japanese speakers and native English speakers. ASR can also be extended for EEC by converting submitted audio answers to raw text and marking erroneous pronunciations so that the errors can be recognized by the students (Kataoka et al., 2019), because ASR shows promise for improving students’ pronunciation (Bajorek, 2017; Cucchiarini et al., 2009).…”
Section: Discussion (mentioning)
confidence: 99%
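As a sketch of the text-level marking step described above (not the actual system from Kataoka et al., 2019), the following Python aligns an ASR transcript against the expected answer with difflib and flags the words the recognizer missed or replaced, i.e., the likely pronunciation errors to show a student. The example sentences and the helper name mark_errors are hypothetical.

```python
import difflib

def mark_errors(expected, recognized):
    """Flag words in the expected answer that the ASR transcript
    dropped or replaced, so a student can see where pronunciation
    likely failed."""
    exp, rec = expected.split(), recognized.split()
    marked = []
    matcher = difflib.SequenceMatcher(a=exp, b=rec)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            marked.extend(exp[i1:i2])
        else:
            # Mark substituted or deleted words in the expected text.
            marked.extend(f"**{w}**" for w in exp[i1:i2])
    return " ".join(marked)

print(mark_errors("she sells sea shells", "she sells see shells"))
# -> she sells **sea** shells
```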
“…Deep learning is an artificial intelligence (AI) technique for data-driven classification using multilayered neural networks. Computer-assisted language learning (CALL) and computer-assisted pronunciation training (CAPT) [3][4][5][6][7][8] have gained much attention in the field of language teaching and training. CALL and CAPT systems are widely used to improve language learning and teaching methods.…”
Section: Motivation To Solve Pronunciation Problem Of Thai Vowels (mentioning)
confidence: 99%
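For readers unfamiliar with the term, here is a minimal sketch of the "multilayered neural network" classifier this passage refers to: a one-hidden-layer forward pass in NumPy mapping acoustic features to vowel-class probabilities. The layer sizes, random weights, and 13-dimensional MFCC input are illustrative assumptions, not details of any cited CAPT system.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with ReLU, softmax output over vowel classes."""
    h = np.maximum(0.0, x @ w1 + b1)    # hidden layer
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())   # numerically stable softmax
    return e / e.sum()

# Hypothetical sizes: 13 MFCC features in, 5 vowel classes out.
n_in, n_hidden, n_out = 13, 32, 5
w1, b1 = rng.normal(size=(n_in, n_hidden)) * 0.1, np.zeros(n_hidden)
w2, b2 = rng.normal(size=(n_hidden, n_out)) * 0.1, np.zeros(n_out)

probs = mlp_forward(rng.normal(size=n_in), w1, b1, w2, b2)
print(probs)  # class probabilities summing to 1
```

In a real CAPT system the weights would of course be trained on labeled vowel recordings rather than drawn at random; the sketch only shows the multilayered structure the quote names.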
“…Therefore, this research aims to design and develop an automatic CAPT system using a deep learning structure for Thai vowel speech recognition. This system is developed to solve the problems of practicing Thai vowel pronunciation for (1) nonnative learners or nonstandard Thai speakers and (2) people with pronunciation disabilities; (3) to address the shortage of specialists in teaching Thai vowel pronunciation; (4) to replace the original process, which is complicated, time-consuming, and does not present results in real time; and (5) to invent a new tool for learning languages online that is appropriate for the current situation.…”
Section: Contributions In Automatic Thai Vowels Pronunciation Recogni... (mentioning)
confidence: 99%