One of the implicit aims of higher education is to enable students to become better judges of their own work. This paper examines whether students who voluntarily engage in self-assessment improve in their capacity to make those judgements. The study uses data from a web-based marking system that gives students the opportunity to assess themselves on each criterion for each assessment task throughout a program of study. Student marks were compared with those from tutors to plot changes over time. The findings suggest that students' judgements do, overall, converge with those of tutors, but that there is considerable variation across achievement levels, with weaker students showing little improvement. While the study is limited by its reliance on voluntary participation and the consequent gaps in the data set, it shows how the development of judgement over time can be demonstrated and points to the potential for more systematic interventions to improve students' judgements. It also illustrates the use of web-based marking and feedback software (ReView) that has considerable utility in aiding self-assessment research.
Can extended opportunities for self-assessment over time help students develop the capacity to make better judgements about their work? Using evidence gathered through students' voluntary self-assessment of their performance on assessment tasks in two different disciplines at two Australian universities, the paper focuses on the effects of sequences of units of study and of different types of assessment task (written, oral, analysis, project) on the development of student judgement. Convergence between students' criteria-based gradings of their own performance in units of study and those allocated by tutors was analysed to explore the calibration of students' judgement over time. First, the paper seeks to replicate analyses from an earlier, smaller-scale study to confirm that students' judgements can be calibrated through continuing opportunities for self-assessment and feedback. Second, it extends the analysis to coherently designed sequences of units of study and explores the effects of different types of assessment. It finds that disruptive patterns of assessment within a sequence of subjects can reduce convergence between student and tutor judgements.
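To make the convergence analysis described above concrete, the sketch below shows one way the gap between student self-assessed marks and tutor marks might be tracked across a sequence of assessment tasks. It is an illustrative assumption rather than the authors' actual procedure; the column names (student_mark, tutor_mark, task_order) and the use of a paired t-test are hypothetical choices. A shrinking mean absolute gap, and a difference that loses statistical significance across the sequence, would correspond to the convergence pattern these studies report.

```python
# Illustrative sketch only: one way to quantify convergence between student
# self-assessed marks and tutor marks across a sequence of assessment tasks.
# Column names (student_mark, tutor_mark, task_order) are hypothetical.
import pandas as pd
from scipy import stats

def convergence_by_task(df: pd.DataFrame) -> pd.DataFrame:
    """For each task in the sequence, report the mean signed gap
    (positive = student over-estimation), the mean absolute gap,
    and a paired t-test comparing student and tutor marks."""
    rows = []
    for task, grp in df.groupby("task_order"):
        gap = grp["student_mark"] - grp["tutor_mark"]
        t, p = stats.ttest_rel(grp["student_mark"], grp["tutor_mark"])
        rows.append({
            "task_order": task,
            "n": len(grp),
            "mean_gap": gap.mean(),            # > 0 means students over-estimate
            "mean_abs_gap": gap.abs().mean(),  # smaller = better calibrated
            "t": t,
            "p": p,
        })
    return pd.DataFrame(rows).sort_values("task_order")

# Example usage with made-up marks on a 0-100 scale:
marks = pd.DataFrame({
    "task_order":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "student_mark": [75, 82, 68, 71, 78, 66, 70, 74, 65],
    "tutor_mark":   [62, 70, 60, 65, 74, 63, 69, 73, 64],
})
print(convergence_by_task(marks))
```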
This work was part of an ALTC-funded project, Facilitating staff and student engagement with graduate attribute development, assessment and standards. The project team would like to acknowledge the teaching team of the subject (Dr Peter Docherty and Mr Harry Tse) for their contribution to this study.

Self-assessment can be conceptualised as the involvement of students in identifying assessment criteria and standards that they can apply to their work in order to make judgements about whether they have met these criteria (Boud, 1995). It is a process that promotes student learning rather than just grade allocation. However, self-assessment does not have obvious face validity for students, and many students find it difficult to make an objective assessment of their own work (Lindblom-Ylänne, Pihlajamäki & Kotkas, 2006). Previous business education research has also found that self-assessment does not closely reflect either peer or instructor assessments (Campbell et al., 2001). The current study aimed to explore: (a) the relationship between self-assessment grading and teacher assessment; and (b) the effect of self-assessment in engaging students with graduate attributes, in order to examine the tenets of self-assessment. Self-assessment was investigated through the application of an online assessment system, ReView, used to encourage more effective self-assessment in business education. Data collected from the two groups (students and teachers) demonstrated that: (1) initial assessments by the teaching academics and the students' self-assessments were significantly different, with students overestimating their ability on every criterion; (2) this variation diminished over time to the point that there was no significant difference between the two sets of assessments; and (3) students' awareness of the graduate attributes for their degree program increased from the beginning to the end of the subject (Note 1).
Assurance of learning (AOL) is a quality enhancement and quality assurance process used in higher education. It involves determining programme learning outcomes and standards, and systematically gathering evidence to measure students' performance against these. The systematic assessment of whole-of-programme outcomes provides a basis for curriculum development and management, continuous improvement, and accreditation. To better understand how AOL processes operate, a national study of university practices across one discipline area, business and management, was undertaken. To gather data on AOL practice, interviews were conducted with a sample of business school representatives (n = 25). Two key processes emerged: (1) mapping of graduate attributes and (2) collection of assurance data. External drivers such as professional accreditation and government legislation were the primary reasons for undertaking AOL, but intrinsic motivators relating to continuous improvement were also evident. Academic commitment was facilitated through an embedded approach to AOL in the majority of universities in the study. A sustainable and inclusive process of AOL was seen to support wider stakeholder engagement in the development of higher education learning outcomes.

The rigour and relevance of tertiary business education standards have been a matter of much recent discontent and debate. Martell and Calderon (2009) cited growing public dissatisfaction with the quality of US college education, and Bennis and O'Toole (2004) have highlighted how MBA programmes have been criticised for failing to develop skills and values in graduates, criticisms that have come not only from students, employers and the media but also from deans of prestigious business schools. A recent UK government white paper, Students at the heart of the system (2011), set out the quality challenges of a changing higher education environment, recognising the need to strengthen processes and to adapt and reinforce systems to improve practice. In Australia, the recently established regulatory body, Tertiary Education Quality
Professional Learning has become a feature of business curricula in universities around the world. The term "professional learning" (PL) is often used to encapsulate dimensions of educational programs that highlight contemporary industry issues explicitly linked to industry and professional bodies. PL encompasses the skills, qualities and attributes that are required by industry and the processes through which those skills are acquired. It encourages deep learning in relation to the student's future profession, and includes industry engagement, work-integrated learning and authentic learning environments. This paper describes a typology of approaches to PL derived from a national study of good practices in business faculties in Australian universities. It identifies the enablers and impediments to the successful adoption of PL, and discusses the challenges associated with industry engagement in PL for academics, students and business organisations.