1999
DOI: 10.1002/j.2168-9830.1999.tb00415.x

Designing Sound Scoring Criteria for Assessing Student Performance

Abstract: Assessment of student performance has become a fundamental aspect of teaching and learning and a key task for engineering educators under the new ABET (Accreditation Board for Engineering and Technology) accreditation requirements. Performance assessment also poses new challenges for many faculty. The purpose of this paper is to fill a void in the literature and assist faculty in meeting part of the performance-assessment development challenge. Specifically, this paper focuses on a critical feature …

Cited by 25 publications (21 citation statements: 0 supporting, 21 mentioning, 0 contrasting) · Citing years: 2002–2020 · References 3 publications
“…An effective approach is to identify aspects of the product or presentation to be rated (e.g., for grading project or laboratory reports, the aspects might be technical soundness, organization, thoroughness of discussion, and quality of writing), select a weighting factor for each aspect, and construct a rubric: a form on which the evaluator assigns numerical ratings to each specified aspect and then uses the specified weighting factors to compute an overall rating. Trevisan et al. [85] offer suggestions regarding the effective design and use of rubrics, including a recommendation that the characteristics of the highest and lowest ratings and the midpoint rating for each feature be spelled out fairly explicitly. If several raters complete forms independently and then reconcile their ratings, the result should be very reliable, and the reliability can be increased even further by giving raters preliminary training on sample products or videotaped presentations.…”
Section: Assessing Learning (mentioning)
confidence: 99%
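
The weighted-rubric procedure quoted above reduces to a weighted sum of per-aspect ratings. The sketch below makes that concrete; it is a generic illustration, not code from the paper, and the aspect names, weights, and 1–5 rating scale are assumptions chosen for the example.

```python
# Minimal sketch of the weighted-rubric scoring described above.
# Aspect names, weights, and the 1-5 rating scale are illustrative
# assumptions, not values from Trevisan et al.

# Weighting factors for each rated aspect (assumed to sum to 1.0).
WEIGHTS = {
    "technical soundness": 0.40,
    "organization": 0.20,
    "thoroughness of discussion": 0.25,
    "quality of writing": 0.15,
}

def overall_rating(ratings: dict[str, float]) -> float:
    """Combine per-aspect ratings (e.g., on a 1-5 scale) into a weighted overall score."""
    missing = WEIGHTS.keys() - ratings.keys()
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    return sum(WEIGHTS[aspect] * ratings[aspect] for aspect in WEIGHTS)

# Example: one evaluator's completed rubric form (invented scores).
form = {
    "technical soundness": 4,
    "organization": 3,
    "thoroughness of discussion": 5,
    "quality of writing": 4,
}
print(f"overall rating: {overall_rating(form):.2f}")  # -> 4.05
```

Several raters could each fill in such a form independently and then reconcile their overall ratings, as the quoted passage suggests.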
“…they are widely used because they provide a convenient measure with some validity [24,27,28]. Objective measures, which at first glance appear much more direct and definitive, are not simple to obtain and have important disadvantages (i.e., availability, length, cost, and relevance) [27,29]. Problems with observer bias create the most important issue when considering direct measurements, because ensuring truly objective data requires careful and complicated experimental design, perhaps even involving “double blind” type studies.…”
Section: Journal of Engineering Education (mentioning)
confidence: 99%
“…Specifically, differences in scoring between or among raters detract from consistency and therefore lower reliability estimates. However, work on rater scoring of performance assessments, essays, and constructed-response tasks has shown that high reliability can be obtained [6,7]. Requirements include clear scoring criteria, decision rules, and rater training.…”
Section: Establishing Reliability (mentioning)
confidence: 99%
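
To make the rater-consistency point above concrete, the sketch below computes two standard agreement indices for a pair of raters: raw percent agreement and Cohen's kappa, which corrects for chance agreement. It is a generic illustration with invented scores, not a method taken from the cited works [6,7].

```python
# Minimal sketch: inter-rater agreement for two raters scoring the same
# set of student performances on a categorical rubric scale.
# The scores below are invented for illustration.
from collections import Counter

def percent_agreement(a: list[int], b: list[int]) -> float:
    """Fraction of items on which two raters gave the same score (equal-length lists)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)  # observed agreement
    freq_a, freq_b = Counter(a), Counter(b)
    # Expected chance agreement from each rater's marginal score frequencies.
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a.keys() | freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

# Two raters' scores (1-4 scale) for ten reports -- assumed values.
rater1 = [4, 3, 3, 2, 4, 1, 3, 2, 4, 3]
rater2 = [4, 3, 2, 2, 4, 1, 3, 3, 4, 3]
print(f"percent agreement: {percent_agreement(rater1, rater2):.2f}")  # -> 0.80
print(f"Cohen's kappa:     {cohens_kappa(rater1, rater2):.2f}")       # -> 0.71
```

Clear scoring criteria, decision rules, and rater training, as the quote notes, are what push indices like these toward acceptable values in practice.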