Handbook of Test Development
DOI: 10.4324/9780203874776.ch14
Innovative Item Formats in Computer-Based Testing: In Pursuit of Improved Construct Representation

Cited by 44 publications (59 citation statements)
References 0 publications
“…the innovative formats may facilitate more authentic and direct measurement of knowledge, skills, and abilities (KSA) than the MC format allows (Parshall et al., 2002; Scalise & Gifford, 2008; Sireci & Zenisky, 2006). The innovative formats may expand the content coverage of a testing program through the use of multi-media (Swygert & Contreras, 2001; Vispoel, Wang, & Bleiler, 1997).…”
mentioning
confidence: 97%
“…For example, computer literacy may be a source of construct irrelevant variance, and the lack of research and guidance on writing innovative items poses significant challenges for the widespread implementation of the novel formats (Huff & Sireci, 2001; Sireci & Zenisky, 2006). Zenisky and Sireci (2002) reviewed 21 computerized innovative item types.…”
mentioning
confidence: 99%
“…Accordingly, scoring is included in the framework of mode effects for the NEPS. In general, CBT enables automatic scoring for a variety of selected-response formats (e.g., multiple-choice) and for some simple constructed-response formats (e.g., short text answers, text highlighting; see Sireci and Zenisky 2006). For other response modes (e.g., complex essay scoring), automatic scoring is still under research (e.g., Haberman and Sinharay 2010).…”
Section: Test Scoring
mentioning
confidence: 98%
“…Computerized assessments bring many advantages to examinees compared to more conventional paper-based assessments. For instance, computers support the creation of new alternative item types and innovative item formats (Sireci and Zenisky 2006); items on computer-based tests can be scored immediately to offer immediate feedback to the examinees (Drasgow and Mattern 2006); and computers also allow on-demand testing (van der Linden and Glas 2010). However, the most significant benefit of computerized assessment is that it permits examiners to assess more complex performances of students via integrative test items which may include digital media to increase the types of skills, knowledge, and competencies that can be evaluated (Bartram 2006).…”
Section: Computer Adaptive Testing
mentioning
confidence: 99%