Background
Admission to medical school is one of the most competitive entry points in higher education. Universities invest considerably in developing selection processes that aim to identify the most appropriate candidates for their medical programs. This paper explores data from three undergraduate medical schools to offer a critical perspective on predictive validity in medical admissions.

Methods
This study examined 650 undergraduate medical students from three Australian universities as they progressed through the initial years of medical school (accounting for approximately 25 per cent of all commencing undergraduate medical students in Australia in 2006 and 2007). Admissions criteria (aptitude test score based on UMAT, school result and interview score) were correlated with GPA over four years of study. Standard regressions of GPA on the three admissions variables were also conducted for each institution at each year level.

Results
Overall, the data showed positive correlations between performance in medical school and both school achievement and UMAT score, but not interview score. However, substantial differences were exposed between schools, across year levels, and within sections of UMAT. Despite this, each admissions variable was shown to add to the explanation of course performance, net of the other variables.

Conclusion
The findings suggest the strength of using multiple admissions tools in predicting the outcomes of medical students. However, they also highlight the large differences in outcomes achieved by different schools, emphasising the pitfalls of generalising results from predictive validity studies without recognising the diverse ways in which such studies are designed and the variation in the institutional contexts in which they are administered. The assumption that high positive correlations are desirable (or even expected) in these studies is also problematised.
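To make the analysis described in the Methods concrete, the sketch below shows one way such correlations and regressions could be computed in Python with pandas and statsmodels. This is a minimal illustration under stated assumptions, not the study's actual analysis: the file name and the column names (school, year, umat, school_result, interview, gpa) are hypothetical.

```python
# Minimal sketch of correlating admissions variables with GPA and regressing
# GPA on those variables, per institution and year level.
# Assumes a hypothetical CSV with one row per student-year and the columns
# named below; these are illustrative, not the study's actual data layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("admissions_outcomes.csv")  # hypothetical file name

predictors = ["umat", "school_result", "interview"]

for (school, year), grp in df.groupby(["school", "year"]):
    # Pearson correlation of each admissions variable with GPA.
    corrs = grp[predictors + ["gpa"]].corr()["gpa"][predictors]
    print(school, year, corrs.round(2).to_dict())

    # OLS regression of GPA on all three admissions variables at once,
    # so each coefficient reflects that variable's contribution
    # net of the other variables.
    model = smf.ols("gpa ~ umat + school_result + interview", data=grp).fit()
    print(model.params.round(3).to_dict(), "R^2 =", round(model.rsquared, 3))
```

Fitting all three predictors in a single regression is what allows each coefficient to be read "net of the other variables", as the Results describe; running the loop per school and year level is one way the between-school and between-year differences reported above could surface.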
Low-stakes assessment is supposed to improve educational practice by providing feedback to different actors in educational systems. However, the process of assessment, from design to the point of a final impact on student learning outcomes, is complex and diverse. It is hard to identify the reasons for substandard achievement on assessments, let alone the means that should be undertaken to improve the educational setting. Furthermore, it is difficult to show a causal link between educational reforms and change in test achievement over time. This paper examines the potential impact of low-stakes testing initiatives on educational improvement in engineering higher education through two case studies. It discusses how the design of constructs and particular assessment conditions may foster or limit educational improvement, and outlines conditions for ensuring that low-stakes assessment designs achieve educational improvement.

Introduction
Higher education continues to grow in significance and scale across the globe. As the number of programmes and graduates increases, there is a clear rationale for ensuring a sufficient supply of quality graduates who have acquired the competencies needed to enter the workforce. Coupled with this push, there is increased regulation in the sector, with more regulatory bodies requiring more quantitative evidence that university and vocational graduates are reaching minimum standards.

In order to react appropriately to these global and national developments, low-stakes assessment is intended to be a means of improving educational practice by feeding back information on learning results to different actors in educational systems. However, the process from assessment design to test administration, to the analysis and reporting of results, followed by the respective reactions of educational stakeholders, to the point of a final impact on student learning outcomes, is complex and diverse. It is hard to identify the reasons for substandard achievement on assessments, let alone the means that should be undertaken to improve the educational setting (Blaich and Wise 2011). Equally, it is difficult to show a causal link between educational reforms and change in test achievement over time (Black and Wiliam 1998).
The intention here is to provide guidance for others with similar aims, both within the medical education community and in other disciplines, by outlining the developmental pathway of the project and the lessons the collaboration team has learnt in establishing AMAC.