Assessing creative problem-solving with automated text grading
Wang, Chang, & Li (2008)
DOI: 10.1016/j.compedu.2008.01.006

Cited by 69 publications (45 citation statements). References 24 publications.
“…Many studies also note that it is difficult to develop multiple-choice test items to assess higher cognitive skills (Hsiao et al., 2014; Wang et al., 2008). These studies support the finding of the post-test outcome of the current study, where the concept mapping strategy was more likely to promote higher-order abilities.…”
Section: Discussion (supporting)
Confidence: 80%
“…In contrast, open-ended questions that elicit constructed responses and give students a higher degree of freedom in reasoning may be a better foundation to evaluate higher-order thinking (Chang & Barufaldi, 2010; Ilhan, Sozbilir, Sekerci, & Yildirim, 2015; Wang, Chang, & Li, 2008; Yeh et al., 2012). Many studies also note that it is difficult to develop multiple-choice test items to assess higher cognitive skills (Hsiao et al., 2014; Wang et al., 2008).…”
Section: Discussion (mentioning)
Confidence: 99%
“…Guidance that encourages students to engage conceptually needs careful formulation, framing, and delivery (Havnes et al., 2012). Using computers to assess responses and provide automated guidance has advantages for improving students' explanations, including longer wait times for students to construct better explanations (Swift & Gooding, 1983), responsiveness that can be motivating (Van der Kleij et al., 2012), privacy for students to make mistakes (Scalise et al., 2011), and usually good agreement with human scores (Landauer et al., 2003; Page, 2003; Wang et al., 2008). Some students view computer-based guidance as fairer than human-based guidance (Lipnevich & Smith, 2009).…”
Section: Designing Guidance for Explanations (mentioning)
Confidence: 97%
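The statement above notes that automated grading usually agrees well with human scores (Landauer et al., 2003; Page, 2003; Wang et al., 2008). As a minimal sketch only, not the procedure used in any of those studies, the Python snippet below computes two common agreement measures on hypothetical human and machine score vectors: the exact-agreement rate and the Pearson correlation.

```python
# Minimal sketch (not the authors' code): comparing hypothetical automated
# scores against human scores with two simple agreement measures.
import numpy as np

# Hypothetical scores on a 0-4 scale for ten student responses.
human_scores = np.array([4, 3, 2, 4, 1, 0, 3, 2, 4, 1])
machine_scores = np.array([4, 3, 2, 3, 1, 0, 3, 2, 4, 2])

# Exact-agreement rate: fraction of responses scored identically.
exact_agreement = np.mean(human_scores == machine_scores)

# Pearson correlation between the two score vectors.
pearson_r = np.corrcoef(human_scores, machine_scores)[0, 1]

print(f"Exact agreement: {exact_agreement:.2f}")
print(f"Pearson r:       {pearson_r:.2f}")
```

Reported agreement in the cited work may rely on other statistics (e.g., weighted kappa); the two measures here are chosen only because they are simple and widely used.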
“…In [27] the CarmelTC algorithm, which uses a Naïve Bayes classifier, is proposed. On the other hand, [28] …”
Section: Related Work (mentioning)
Confidence: 99%
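The citation above mentions that CarmelTC classifies responses with a Naïve Bayes model. The sketch below is not the CarmelTC implementation; it is only an illustration of the general idea of Naïve-Bayes-based classification of short free-text answers, using a bag-of-words model and invented example answers with made-up labels.

```python
# Minimal sketch (assumption, not CarmelTC): a Naive Bayes text classifier
# that assigns a grade category to short free-text answers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training answers with human-assigned labels ("good" / "poor").
answers = [
    "Plants use sunlight to convert carbon dioxide and water into glucose.",
    "Photosynthesis makes food for the plant using light energy.",
    "Plants eat dirt to grow bigger.",
    "The plant gets energy because it is green.",
]
labels = ["good", "good", "poor", "poor"]

# Bag-of-words features feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(answers, labels)

# Classify a new, unseen answer.
new_answer = ["Light energy is used to turn water and carbon dioxide into sugar."]
print(model.predict(new_answer))  # e.g. ['good']
```

In practice such a grader would be trained on far more responses and evaluated against human raters, as in the agreement sketch above.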