Evidence from 73 programmes in 14 UK universities sheds light on the typical student experience of assessment over a three-year undergraduate degree. A previous small-scale study in three universities characterised programme assessment environments using a similar method. The current study analyses data about assessment patterns using descriptive statistical methods, drawing on a large sample in a wider range of universities than the original study. Findings demonstrate a wide range of practice across programmes: from 12 summative assessments on one programme to 227 on another; from 87% of assessment by examination on one programme to none on others. While such variation casts doubt on the comparability of UK degrees, programme assessment patterns are complex. Further analysis distinguishes common assessment patterns across the sample. Typically, students encounter eight times as much summative as formative assessment, a dozen different types of assessment, and more than three quarters of assessment by coursework. High summative and low formative assessment diets are likely to compound students' grade orientation, reinforcing narrow and instrumental approaches to learning. A high variety of assessment types probably contributes to student confusion about goals and standards. Making systematic headway in improving student learning from assessment requires a programmatic and evidence-led approach to design, characterised by dialogue and social practice.
Analytic and holistic marking are typically researched as opposites, generating a mixed and inconclusive evidence base. Holistic marking is low on content validity but efficient; analytic approaches are praised for transparency and detailed feedback. Holistic approaches are claimed to be better suited to capturing complex interactions among criteria when deciding marks, whereas analytic rules are thought to be limited in this respect. Both guidance and evidence in this area remain limited to date. Drawing on the known complementary strengths of these approaches, a university department enhanced its customary holistic marking practices by introducing analytic rubrics for feedback and as an ancillary tool during marking. The customary holistic approach to deciding marks was retained in the absence of a clear rationale from the literature. Exploring the relationship between the analytic criteria and holistic marks became the focus of an exploratory study during a trial year, using two perspectives. First, following guidance from the literature, practitioners formulated analytic rules, allocating weightings to criteria based on their understanding of each criterion's role, in order to explain output marks. Second, holistic marks and analytic criterion judgements collected throughout the year were analyzed using machine learning techniques (random forests). This study reports on data from essay-based exam questions for years 2 and 3 of study (n = 3,436). Random forests provided a ranking of the variable importance of criteria relative to holistic marks, which was used to create data-derived criterion weightings. Moreover, illustrative decision trees provide insights into the non-linear roles of criteria at different levels of achievement. The practitioner-expected and data-derived criterion weightings reveal contrasts in the ranking of top criteria within and across years. Our exploratory study confirms that holistic and analytic approaches, combined, offer promising and productive ways forward in both research and practice to gain insight into the nature of overall marks and their relations with criteria. Rather than opposites, these approaches offer complementary insights that help substantiate claims made in favor of holistic marking. Our findings show that analytic approaches may offer insights into the extent to which holistic marking really aligns with the assumptions made about it. Limitations and further investigations are discussed.
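To make the second perspective concrete, the following is a minimal sketch of how random-forest variable importance might yield data-derived criterion weightings, and how a shallow decision tree can illustrate non-linear criterion roles. It assumes pandas and scikit-learn; the file name, criterion names and model settings are hypothetical illustrations, not the study's actual pipeline.

```python
# Minimal sketch: data-derived criterion weightings from holistic marks.
# Assumes pandas and scikit-learn; "marks.csv" and the criterion column
# names are hypothetical, not taken from the study.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# One row per marked script: analytic criterion judgements as predictors,
# the holistic mark as the target.
df = pd.read_csv("marks.csv")
criteria = ["argument", "evidence", "structure", "style"]  # assumed names
X, y = df[criteria], df["holistic_mark"]

# Fit a random forest and read off impurity-based variable importances.
# In scikit-learn these already sum to one, so the sorted values can be
# interpreted directly as relative criterion weightings.
forest = RandomForestRegressor(n_estimators=500, random_state=0)
forest.fit(X, y)
weights = pd.Series(forest.feature_importances_, index=criteria)
print(weights.sort_values(ascending=False))

# A shallow decision tree as an illustrative, human-readable model of
# non-linear criterion roles at different levels of achievement.
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=criteria))
```

The data-derived weightings produced this way could then be compared against the weightings practitioners expected, as the abstract describes.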
E-assessment is an umbrella term that comprises a complex array of tools of varying capacities. This paper focuses on e-assessment from the perspective of its strategic institutional development in higher education. The paper argues that research on e-assessment has been dominated by a focus on investigating the benefits of use and adoption rather than on building an understanding of development and implementation. The current paper proposes a qualitative, assessment- and process-specific framework, both to investigate e-assessment and to chart its institutional development. This framework is an annual assessment life cycle, and one case illustrates its use to elaborate an institutional development agenda for e-assessment. The institutional inquiry into e-assessment consisted of interviews with 22 academic staff members using the assessment life cycle. The goal was to identify how technology played a role in assessment in general. The information gathered was used to construct an institutional overview of how electronic and paper-based modes supported assessment. The overview, which used the life-cycle framework, revealed a subtle interplay between assessment stakes, type, stages and modes. Initial stages in the assessment life cycle are substantively supported electronically. Middle stages (submission, marking, feedback return) present greater complexity, with different uses of paper and electronic modes depending on the assessment type. High-stakes summative assessment shows a hybrid process, in which both paper and electronic modes fulfil substantive roles in supporting the assessment stages. The later stages of the cycle are mainly paper based regardless of assessment type. Low-stakes e-assessment may be an all-electronic process. This simplified institutional overview of the state of e-assessment, together with the emphasis on its cyclical nature, has helped to elaborate a differentiated development strategy for various e-assessment forms, taking assessment type and particular stages as the foci of development.
At the University of Nottingham, peer-assessment was piloted with the objective of helping students gain a greater understanding of marking criteria so that they might improve their comprehension of, and solutions to, future mathematical tasks. The study resulted in improvement in all four factors of observation, emulation, self-control and self-regulation, thus providing evidence of a positive impact on student learning. The pilot involved a large first-year mathematics class who completed a formative piece of coursework prior to a problem class. At the problem class, students were trained in the use of marking criteria before anonymously marking peer work. The pilot was evaluated using questionnaires (97 responses) administered at the beginning and end of the problem class. The questionnaires elicited students’ understanding of the criteria before and after the task, and students’ self-efficacy in relation to assessment, self-control and self-regulation. The analysis of students’ descriptions of the assessment criteria shows that their understanding of the requirements for the task was expanded. After the class, explanations of the method and of consistent, correct notation were much more present in students’ descriptions. Furthermore, 67 per cent of students stated they had specific ideas on how to improve their solutions to problems in the future. Students’ self-perceived abilities to self-assess and improve were positively impacted. The pilot gives strong evidence for the use of peer-assessment to develop students’ competencies as assessors, both in terms of their understanding of marking criteria and, more broadly, their ability to self-assess and regulate their learning.