In this paper we report the outcomes of a national survey of academic development staff across a range of UK higher education institutions, examining the approaches adopted to evaluate teaching-related CPD. Despite the increasing drive towards accountability, the majority of respondents undertook no benchmarking to establish existing knowledge, there was minimal use of existing data sets, and few evaluated provision longitudinally. We argue that, in order to arrive at an evidence-informed approach, both evaluation and teaching-related CPD must be clearly conceptualised and aligned with institutional priorities. The involvement of students in staff CPD could also be usefully explored.
The professionalisation of teaching is of increasing importance in United Kingdom higher education due to a number of converging processes, including the ongoing proliferation of managerialism, increasing quality agendas and changes to student fee structures. These changes have brought into sharp relief the need for greater understanding of how quality teaching evolves in university settings. One key element of this involves academic development and its impact on teaching and learning. Current literature in this area suggests that a plethora of ideas, frameworks and instruments claiming best practice exist (Hughes et al., 2016) but that take-up of these is inconsistent across the sector (Bamber, 2013). This prompted a Higher Education Academy (HEA) funded national research project which resulted in an evidence-based toolkit for evaluating academic development specifically within the UK context (Kneale, Winter, Spowart, Turner, & Muneer, 2016a). As part of the toolkit augmentation, academic development representatives from 12 higher education providers were asked to create, review and test uniquely tailored evaluation instruments from a core of pre-selected questions based on Guskey's (2002) critical levels of evaluation. These instruments were then piloted with university teachers who had participated in teaching-related continuing professional development activities. This paper reports on these individuals' reflections on using the toolkit. It suggests that academic developers are interested in evaluating the impact of their work on a range of subjects: teachers, students and the wider institutional culture, but that confidence and expertise vary. Using the toolkit provided 'traditional' evaluation data, for example satisfaction with the development activity and changes to lecturers' conceptions and behaviours. However, it also prompted important and timely discussions around current evaluation practice, including the urgent need for transformational reform of institutional culture to support potential links between evaluation of teaching and good standing, and it helped to make more explicit the thorny issue of evidencing student learning. This paper will be of interest primarily to those involved with academic development and its evaluation. However, the findings are relevant to all those with an interest in, or responsibility for, evaluating teaching in a higher education context. The paper offers an important contribution to the international literature at a time when higher education globally faces increasingly demanding questions about teaching, learning and quality. Evaluation, and how to do it well, is timely and important business.
Learning gain is a politicised concept within contemporary HE and, as such, has been aligned with agendas of teaching excellence and learning outcomes, but the extent to which it captures actual learning has yet to be clarified. Here, we report the outcomes of a learning gain study which examines how students' knowledge, skills and experiences as researchers develop throughout their studies. We examine data from a self-reporting survey administered across a university and college-based HE providers during students' second year of undergraduate study. The data highlight disciplinary differences in student engagement with research methods and the significance of the perceived relevance of research methods to students' learning. These findings have a bearing on the development of measures of learning gain, as they demonstrate the complexity of capturing student learning across disciplines. Our findings can be employed to develop a method of capturing learning gain that can be integrated into undergraduates' research methods education.