1997
DOI: 10.1016/s0191-491x(97)86212-8
An instrumental response to the instrumental student: Assessment for learning

Cited by 17 publications (5 citation statements); References 6 publications
“…A plausible explanation may be found in the conflicting nature of the student-activating teaching methods and the conventional closed-book multiple-choice test that followed instruction; the case of an unsuccessful constructive alignment (Biggs, 1996). Due to the incompatibility of instructional and assessment methods (Askham, 1997; Biggs, 1996; Shepard, 2000), one might expect that if students are to evaluate their course experiences, such as goals and standards, generic skills, independence, workload, and good teaching, that ambiguity and conflicting thoughts on "the course" (which is considered an entity in the CEQ) are assured and will display insignificant effects and low effect sizes as a consequence. Inversely, the instruction-by-assessment settings that best align learning, instruction and assessment (Biggs, 1996) demonstrate beautifully the positive effects of students' experiences with the course on their appraisal of the teaching in that course.…”
Section: Course Experiences
confidence: 96%
“…Despite a database of over 700 citations, including recent working papers, there are only three papers listed that have a specific interest in assessment practice: Askham (1997), who examines two-way feedback in a portfolio of assessment, but from the point of view of general education; Bilen et al (2005), whose main focus is the programme design of Penn State's engineering entrepreneurship programme, with assessment practice as a secondary focus; and Reid and Petocz (2004), who examine different assessment techniques designed for assessing creativity [2]. While recognising the limitations of both the SLR and the NCGE's bibliographical database, it does seem that there is a paucity of work specifically addressing assessment practice in enterprise education published in entrepreneurship journals.…”
Section: Assessment Practice In Education
confidence: 99%
“…While they are likely making a valid point, the evidence on which they are able to draw to justify the argument is inherently limited, as there is little existing research on what entrepreneurship educators actually “do” when engaging in assessment. For example, few studies appear to address assessment practice to a significant degree (MacFarlane and Tomlinson, 1993; Askham, 1997; Reid and Petocz, 2004; Bilen et al , 2005; Pittaway et al , 2009; Penaluna and Penaluna, 2009). It seems that while researchers can debate assessment in entrepreneurship education based on a disciplinary or pedagogic point of view, this debate may have little value unless researchers become more aware of the actual practices used.…”
Section: Unpicking Assessment Practice
confidence: 99%