1995
DOI: 10.1007/bf02941040
An approach to designing computer-based evaluation of student constructed responses: Effects on achievement and instructional time

Cited by 7 publications (3 citation statements)
References 14 publications
“…Furthermore, guessing or random marking may lead to unjustly high scores (Funk & Dickson, 2011). Generally, open answers are found to be more informative because, compared to conventional MC tests, guessing is minimized and the correct solution cannot be derived by successive elimination (Gibbs, 1995). In a study about vocabulary learning, Heim and Watts found significant differences between multiple-choice and open-ended scores, the former being substantially higher than the latter.…”
Section: Multiple Choice Tests Versus Open Ended Questions
confidence: 99%
“…The educational value of such assessment instruments remains controversial. "The primary criterion for selection of a question type… should not be the ease with which the response to the question gets evaluated by the computer but rather the type of learning the question is designed to assess" (Gibbs and Peck, 1995). Certainly for information technology assessment, multiple choice and structured questions do not provide authentic performance assessment which requires "an active demonstration of the knowledge in question, as opposed to talking or writing about it" (Biggs, 1999).…”
Section: Automated Assessment
confidence: 99%
“…There has been growing interest in using computer‐based applications in many areas of humans' daily life during the last decade, including education systems. This has changed the workflows of many tasks that were performed manually in the past.…”
Section: Introduction
confidence: 99%