Background
MRCGP and MRCP(UK) are the main entry qualifications for UK doctors entering general [family] practice or hospital [internal] medicine. The performance of MRCP(UK) candidates who subsequently take MRCGP allows validation of each assessment. In the UK, underperformance of ethnic minority doctors taking MRCGP has had a high political profile, with a Judicial Review in the High Court in April 2014 for alleged racial discrimination. Although the legal challenge was dismissed, substantial performance differences between White and BME (Black and Minority Ethnic) doctors undoubtedly exist. Understanding ethnic differences can be helped by comparing the performance of doctors who take both MRCGP and MRCP(UK).

Methods
We identified 2,284 candidates who had taken one or more parts of both assessments, MRCP(UK) typically being taken 3.7 years before MRCGP. We analyzed performance on knowledge-based MCQs (MRCP(UK) Parts 1 and 2 and the MRCGP Applied Knowledge Test (AKT)) and on clinical examinations (the MRCGP Clinical Skills Assessment (CSA) and the MRCP(UK) Practical Assessment of Clinical Skills (PACES)).

Results
Correlations between MRCGP and MRCP(UK) were high, the disattenuated correlations of MRCGP AKT with MRCP(UK) Parts 1 and 2 being 0.748 and 0.698, and that of CSA with PACES being 0.636. BME candidates performed less well on all five assessments (P < .001). Correlations disaggregated by ethnicity were complex: MRCGP AKT showed similar correlations with Part 1, Part 2, and PACES in White and BME candidates, but CSA showed stronger correlations with Part 1, Part 2, and PACES in BME candidates than in White candidates. CSA changed its scoring method during the study; multiple regression showed the newer CSA was better predicted by PACES than was the previous CSA.

Conclusions
The high correlations between MRCGP and MRCP(UK) support the validity of each, suggesting they assess knowledge cognate to both assessments. Detailed analyses by candidate ethnicity show that although White candidates outperform BME candidates, the differences are largely mirrored across the two examinations. Whilst the reason for the differential performance is unclear, the similarity of the effects in independent knowledge and clinical examinations suggests that the differences are unlikely to result from specific features of either assessment and most likely represent true differences in ability.
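As an illustrative aside (not part of the abstract), the disattenuated correlations reported above follow Spearman's standard correction for attenuation, which divides an observed correlation by the square root of the product of the two assessments' reliabilities. The sketch below uses hypothetical reliability and correlation values chosen only to show the arithmetic; they are not the study's actual estimates.

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the true-score
    correlation between two measures from their observed correlation
    and their reliabilities."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical example: an observed AKT-Part 1 correlation of 0.64 with
# assumed reliabilities of 0.90 and 0.81 (illustrative values only).
print(round(disattenuate(0.64, 0.90, 0.81), 3))  # prints 0.75, i.e. in the
# same region as the 0.748 disattenuated figure quoted in the Results
```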
Unless examiners are carefully selected, trained, and monitored, examinations may become haphazard. This is perhaps most true of oral, or viva voce ("viva"), examinations, which can generate marks unrelated to competence. To help other examining bodies short-circuit some of the years of experimentation with the oral component of the Royal College of General Practitioners' membership examination (MRCGP), this paper describes the selection, training, guidance, and monitoring arrangements that have been developed.