Preface: A model of curriculum evaluation

This booklet is the first in a series to appear under the auspices of the Association for the Study of Medical Education Research Committee. The intention of these small publications is to introduce teachers and researchers in medical education to current ideas about, and approaches to, curriculum evaluation and educational research.

This first booklet 'sets the scene', as it were, for the series as a whole by addressing broad questions about the nature of curriculum evaluation. Future booklets in the series will focus more particularly on the practicalities and methods involved in planning, implementing and acting upon the findings which emerge from an evaluation.

In this booklet, we want to introduce a general model of curriculum evaluation as a basis for the various points we wish to make here; it will also underpin future booklets in the series, which will refer back to it. For this reason this booklet is somewhat lengthier than those that follow and takes a broader view.

The model of curriculum evaluation proposed is shown in Fig. 1. At one level this diagram shows that development of an educational event can occur without any substantial commitment to evaluation.
This paper reports a follow-on project that assessed a series of portfolios assembled by a cohort of participants attending a course for prospective general practice trainers. In an attempt to enhance reliability, participants were offered a framework for defining and addressing problems using a reflective practice model. The reliability of the judgements made by a panel of assessors about individual 'components', together with that of an overall global judgement about performance, was studied. The reliability of individual assessors' judgements (i.e. their consistency) was moderate, but inter-rater reliability did not reach a level that could support a safe summative judgement. Although the framework offered a possible structure for demonstrating reflective processes, the levels of reliability reached were similar to those in the earlier work and in other subjective assessments generally, perhaps reflecting the individuality of the personal agendas of both the assessed and the assessors, and variations in portfolio structure and content; even agreement among the assessors about evidence that the framework had been used was poor. Suggestions are made for future approaches. The conclusion remains that while portfolios might be valuable as resources for learning, as assessment tools they should be treated as problematic.
This paper reports the reliability of assessments of a series of portfolios assembled by a cohort of participants attending a course for prospective general practice trainers. Initial individual assessments are compared with composite scores produced after open discussion between random pairs of assessors, and the results are analysed using kappa statistics. Overall reliability of a global pass/refer judgement improved from a kappa of 0.26 (fair) with individual assessment to 0.50 (moderate) with paired discussants.
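For readers unfamiliar with the statistic, the minimal sketch below shows how Cohen's kappa corrects raw two-rater agreement on a pass/refer judgement for agreement expected by chance. The ratings are hypothetical illustration data, not drawn from the study, and the benchmark labels follow the Landis and Koch scale used above.

```python
# Minimal sketch of Cohen's kappa for two raters' pass/refer judgements.
# The ratings below are hypothetical illustration data, not the study's.

from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() & freq_b.keys()) / n**2
    return (p_o - p_e) / (1 - p_e)

a = ["pass", "pass", "refer", "pass", "refer", "pass", "refer", "pass"]
b = ["pass", "refer", "refer", "pass", "pass", "pass", "refer", "refer"]

# Landis & Koch: <0.21 slight, 0.21-0.40 fair, 0.41-0.60 moderate, 0.61-0.80 substantial.
print(f"kappa = {cohen_kappa(a, b):.2f}")  # kappa = 0.25, i.e. only 'fair'
```

For the two-rater case, sklearn.metrics.cohen_kappa_score gives the same result; the hand-rolled version is shown only to make the chance correction explicit.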