1988
DOI: 10.2307/1170281

The Impact of Classroom Evaluation Practices on Students

Cited by 215 publications (285 citation statements). References 0 publications.
“…Second, research suggests that enactment of the range of complementary assessment practices is relatively rare in schools (Supovitz & Klein, 2003), even while use of formative and benchmark assessments has been shown to be powerfully associated with gains in student learning (Black & Wiliam, 1998;Crooks, 1988). Traditional understandings of the use of assessment dominate teacher practice, while using assessment as a motivational and instructional tool is a relatively new concept for most teachers (Butler, 1988;Dweck, 2001).…”
Section: Evaluation Questions and Rationale
confidence: 99%
“…The primacy of the learner thus suggests, counterintuitively, that learning tasks, rather than being overly structured by tutors to minimise the possibility of learner error, might be more useful if they were complex or difficult since expertise does not develop through doing only what is within one's competence but through working on real problems that require the extension of knowledge and competence (Scardamalia & Bereiter, 1991). At a pragmatic level, it might also be helpful to focus on a few significant learning tasks for which the learners are accountable, and thereby communicate to the learners the kind of intellectual work which is valued (Crooks, 1988;Gibbs, 1999;Ramsden, 1997), rather than having a multitude of learning tasks which may have little or no alignment with the desired learning outcomes (Biggs, 1999). For example, it would seem important that tutors give primacy to the need for students to conceptualise new knowledge about assessment and so in an iterative fashion, tasks that require the comprehension, transformation and expansion of knowledge about assessment (Sarig, 1996) might better enable students to appreciate that the analyses of, and solutions to, problems of assessment are real and immediate issues with which they have to grapple in their professional practice.…”
Section: What Might Be Done?
confidence: 99%
“…[19][20][21] Messick and others suggest that learning is a consequence of assessment, and that an assessment program can be designed not only to measure competence but also to generate desirable outcomes. [22][23][24] Whereas OSCE-style task assessments have been used historically to determine if specific skills have been mastered in KU pharmacy teaching laboratories, adding a live, whole-case standardized client program has afforded evaluation of problem solving, which is a higher-level composite set of these skills.…”
Section: Introduction
confidence: 99%