Computerized Adaptive Testing: Theory and Practice (2000)
DOI: 10.1007/0-306-47531-6_7

Innovative Item Types for Computerized Testing

Cited by 73 publications (53 citation statements)
References 20 publications
“…Items and tasks like those highlighted above have become the subject of considerable research as testing on computer has become increasingly practical and popular (Clauser, 1995; Haladyna, 1996; Huff & Sireci, 2001; Parshall et al., 2010). A wide assortment of options are therefore now available for using a computer to present information, facilitate interaction, and collect responses in ways not possible with traditional text-based items.…”
Section: As With All Trips We Need To Start By Deciding Whether the…
confidence: 99%
“…Research showed that in videoconference interviews candidate ratings and applicant reactions are therefore lower (Chapman, Uggerslev, &…). These categories represent broad categories and finer distinctions are possible. One such distinction pertains to the medium for conveying the stimuli (Parshall et al., 2000; Potosky, 2008). For instance, textual and pictorial stimuli might be presented via a paper-and-pencil or computerized medium (PC, tablet, smartphone, etc.).…”
Section: Stimulus Format
confidence: 99%
“…Here no interpretative leaps are required because an a priori scoring key (determined via empirical keying, theoretical keying, expert keying, or a combination of those) is applied for evaluating candidates. Automated scoring is typically done via computer algorithms, which might vary from simple (dichotomous) to complex (e.g., polytomous or partial credit scoring systems where answers are scored on a number of weighted criteria; Parshall et al., 2000). Automated scoring applies not only to ability tests, biodata, personality scales, or SJTs, but also to essays and simulations (see Clauser, Kane, & Swanson, 2002).…”
Section: Response Evaluation Consistency
confidence: 99%
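The excerpt above contrasts simple dichotomous scoring keys with polytomous (partial-credit) keys built from weighted criteria. The following is only a minimal sketch of that distinction, not the scoring procedure of Parshall et al. (2000); the items, criteria, weights, and function names are hypothetical.

```python
# Minimal sketch of automated scoring with an a priori key (hypothetical data).

def score_dichotomous(response, key):
    """Dichotomous scoring: 1 if the response matches the key exactly, else 0."""
    return 1 if response == key else 0

def score_partial_credit(response, criteria):
    """Partial-credit (polytomous) scoring: each satisfied criterion adds its weight.

    `criteria` maps a criterion label to a (predicate, weight) pair; the predicate
    checks the candidate's response for that criterion.
    """
    return sum(weight for predicate, weight in criteria.values() if predicate(response))

# Hypothetical multiple-choice item scored dichotomously.
print(score_dichotomous(response="B", key="B"))  # -> 1

# Hypothetical constructed-response item scored on weighted criteria.
essay_criteria = {
    "names_two_sources":   (lambda r: r.count("source") >= 2, 2.0),
    "states_a_conclusion": (lambda r: "therefore" in r, 1.0),
}
print(score_partial_credit(
    response="Drawing on source A and source B, therefore we conclude ...",
    criteria=essay_criteria,
))  # -> 3.0
```

In practice the key itself (which criteria count, and their weights) is fixed in advance, whether empirically, theoretically, or by experts, so the algorithm's role is limited to applying that key consistently to every response.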
“…The use of technological enhancements of sound, graphics, animation, video, or the incorporation of media can also be utilized for e-learning assessment designs [6]. Figure 1 below shows the summary of the 13 question types collected from 15 various sources of scientific research and publications and used as a guiding tool in designing questionnaires.…”
Section: E-learning Assessment Questions
confidence: 99%