1991
DOI: 10.1037/0021-9010.76.4.522
A methodology for scoring open-ended architectural design problems.

Abstract: Psychometric and architectural principles were integrated to create a general approach for scoring open-ended architectural site-design test problems. In this approach, solutions are examined and described in terms of design features, and those features are then mapped onto a scoring scale by means of scoring rules. This methodology was applied to two problems that had been administered as part of a national certification test. Because the test is not currently administered by computer, the paper-and-pencil so…

Cited by 59 publications (38 citation statements); references 12 publications.
“…In the context of the TPO assessment, the specific implication is of whether the examinee responses are obtained and scored appropriately so that the resulting score accurately represents the quality of the performance. For the specific question of use of automated scoring, there is a rich precedence of studies comparing the results of automated and human scoring as the basis for evaluating the validity of automated scores from the Evaluation perspective (e.g., Bejar, 1991;Bennett, Sebrechts, & Marc, 1994;Braun, Bennett, Frye, & Soloway, 1990;Clauser et al, 1995;Clauser et al, 1997;Kaplan & Bennett, 1994;Page & Petersen, 1995;Sebrechts, Bennett, & Rock, 1991;Williamson et al, 1999). It is generally acknowledged that human scores are not perfect and therefore may not be an ideal basis for evaluation of the quality of automated scoring, suggesting that the term gold standard may be a misnomer.…”
Section: The Validity Argument for Scoring TPO with SpeechRater v1.0
Confidence: 99%
“…Appearing in this decade were ETS's first attempts at automated scoring, including of computer science subroutines (Braun et al 1990), architectural designs (Bejar 1991), mathematical step-by-step solutions and expressions (Bennett et al 1997;Sebrechts et al 1991), short-text responses (Kaplan 1992), and essays (Kaplan et al 1995). By the middle of the decade, the work on scoring architectural designs had been implemented operationally as part of the National Council of Architectural Registration Board's Architect Registration Examination (Bejar and Braun 1999).…”
Section: Constructed-Response Formats and Performance Assessment
Confidence: 99%
“…Of course, automated scoring is straightforward for multiple-choice tests, but it has also been used to assess complex skills in a variety of domains, such as architectural design (e.g., Bejar, 1991) and physician patient management (e.g., Clauser et al, 1997). To use automated scoring, tests would need to be administered on computer systems (which also promotes scalability of test administration).…”
Section: Develop and Implement Instruments to Assess Proficiency
Confidence: 99%