2020
DOI: 10.5539/ijel.v10n6p30
An Evaluation of China’s Automated Scoring System Bingo English

Abstract: This study evaluated the effectiveness of Bingo English, a representative automated essay scoring (AES) system in China. Eighty-four essays from an English test held at a Chinese university were collected as the research materials. All essays were scored both by two trained and experienced human raters and by Bingo English, and their linguistic features were quantified in terms of complexity, accuracy, and fluency (CAF), content quality, and organization. After examining the agreement between human score…

Cited by 2 publications (4 citation statements)
References 31 publications (51 reference statements)
“…The results also show that the automated corrective feedback of Pigai.org could help L2 learners correct POS errors in the revised drafts and the posttest. This study thus provides evidence that the automated corrective feedback of Pigainet has been highly influential in helping L2 learners improve the accuracy of their production, both immediately and over time, across POS types, especially articles, verbs, prepositions, and nouns. The results confirm previous studies (Gao et al., 2020; Li et al., 2017) which found that automated corrective feedback systems, such as Criterion and Bingo, have a positive effect on learners' writing accuracy in terms of grammar.…”
Section: Discussion (supporting)
confidence: 90%
“…Automated corrective feedback is a form of Computer-Assisted Language Learning used in L2 English writing assessment that is ubiquitous in current L2 practice and research (e.g., Chen, 2016; Chukharev & Saricaoglu, 2016; Gao et al., 2020). This research examined the effect of the automated corrective feedback of Pigainet, a Computer-Assisted Language Learning instrument, on English writing revision. Data were collected from 591 drafts by 31 participants who submitted their drafts on Pigainet, and errors were coded as frequency ratios according to POS category.…”
mentioning
confidence: 99%
“…So far, scant attention has been given to the scoring validity of these systems, and only a handful of studies have dealt with this area, involving both construct representation and score association. The investigated systems include Write On (Wang, 2012), Bingo English (Gao et al., 2020), Pigai (He, 2013; Wang, 2016; Zhang, 2017; Bai & Wang, 2018; Xu, 2018), and iWrite (Li & Tian, 2018; Qian et al., 2020). Wang (2012) investigated the scoring validity of Write On, an AWE system exclusively designed for the course New Horizon College English.…”
Section: Research On The Scoring Validity Of Chinese English AWE Systems (mentioning)
confidence: 99%
“…But the conclusion was drawn only from the general comments made by the system, and the linguistic features of the sample essays were not investigated. Gao et al. (2020) evaluated the scoring effectiveness of Bingo English, revealing low human-machine agreement (exact agreement rate = 13.10%, EPAA rate = 35.52%) and a moderate correlation (Pearson's r = .519). This study also examined the correlation of human and machine scores with indicators of the essays' linguistic features in terms of complexity, accuracy, fluency, content, and organization, and found that machine scores could partially reflect the essays' quality.…”
Section: Research On The Scoring Validity Of Chinese English AWE Systems (mentioning)
confidence: 99%
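The agreement statistics cited above (exact agreement rate, exact-plus-adjacent agreement, and Pearson's r between human and machine scores) can be illustrated with a short sketch. The score lists below are purely illustrative placeholders, not data from the study, and the function names are my own; this only shows how the three metrics are typically computed.

```python
# Minimal sketch of human-machine agreement metrics for essay scoring:
# exact agreement, exact-plus-adjacent agreement (EPAA), and Pearson's r.
from math import sqrt


def exact_agreement(human, machine):
    """Proportion of essays on which the two scores are identical."""
    return sum(h == m for h, m in zip(human, machine)) / len(human)


def adjacent_agreement(human, machine, tolerance=1):
    """Exact-plus-adjacent agreement: scores within `tolerance` points."""
    return sum(abs(h - m) <= tolerance for h, m in zip(human, machine)) / len(human)


def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Illustrative scores only -- not the study's data.
human_scores = [12, 10, 14, 9, 11, 13, 8, 12]
machine_scores = [11, 10, 12, 10, 13, 13, 9, 11]

print(f"exact agreement: {exact_agreement(human_scores, machine_scores):.2%}")
print(f"EPAA:            {adjacent_agreement(human_scores, machine_scores):.2%}")
print(f"Pearson's r:     {pearson_r(human_scores, machine_scores):.3f}")
```

A low exact agreement combined with a higher EPAA, as reported for Bingo English, typically means the machine's scores land near but not on the human scores; Pearson's r then captures how well the two rank essays relative to each other.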