Although new consultants felt well prepared for medical tasks, their scores on more generic tasks indicate that the alignment between the different phases of the medical education continuum and independent practice needs improvement.
Introduction: Many training programmes in postgraduate medical education (PGME) have introduced competency frameworks, but the effects of this change on preparedness for practice are unknown. We therefore explored how elements of competency-based programmes in PGME (educational innovations, attention to competencies and the learning environment) were related to perceived preparedness for practice among new consultants. Methods: A questionnaire was distributed among 330 new consultants. Respondents rated how well their PGME training programme had prepared them for practice, the extent to which educational innovations (portfolio, Mini-CEX) were implemented, and how much attention was paid to CanMEDS competencies during feedback and coaching; they also answered questions on the learning environment and general self-efficacy. Multiple regression and mediation analyses were used to analyse the data. Results: The response rate was 43% (143/330). Controlling for self-efficacy and gender, the learning environment was the strongest predictor of preparedness for practice (B = 0.42, p < 0.001), followed by attention to competencies (B = 0.29, p < 0.01). Educational innovations were not directly related to preparedness for practice. The overall model explained 52% of the variance in preparedness for practice. Attention to competencies mediated the relationship between educational innovations and preparedness for practice, and this mediation became stronger at higher learning environment values. Conclusions: The learning environment plays a key role in determining the degree to which competency-based PGME prepares trainees for independent practice.
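For readers who want to see how a mediation analysis of this kind might be set up, below is a minimal sketch in Python with statsmodels. The variable names (innovations, competencies, preparedness, self_efficacy, gender) and the data file consultants.csv are hypothetical, and the sketch covers only the simple mediation step, not the moderation by learning environment reported in the abstract or the authors' actual software.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names: 'innovations' (educational innovations, predictor),
# 'competencies' (attention to competencies, mediator) and
# 'preparedness' (perceived preparedness for practice, outcome).
df = pd.read_csv("consultants.csv")  # hypothetical data file

# Path a: predictor -> mediator
a_model = smf.ols("competencies ~ innovations", data=df).fit()
a = a_model.params["innovations"]

# Paths b and c': outcome regressed on mediator and predictor,
# controlling for self-efficacy and gender as in the study.
b_model = smf.ols(
    "preparedness ~ innovations + competencies + self_efficacy + gender",
    data=df,
).fit()
b = b_model.params["competencies"]
c_prime = b_model.params["innovations"]

print(f"indirect effect (a*b) = {a * b:.3f}, direct effect (c') = {c_prime:.3f}")
```

In practice the indirect effect a*b would usually be tested with a bootstrapped confidence interval rather than read off point estimates alone; the sketch omits that step for brevity.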
Background: Internationally, postgraduate medical education (PGME) has shifted to competency-based training. To evaluate the effects of this shift on the outcomes of PGME, appropriate instruments are needed. Aim: To provide an inventory of tasks specialists perform in practice, which can be used as an instrument to evaluate the outcomes of PGME across disciplines. Methods: Following methodology from job analysis in human resource management, we used document analyses, observations, interviews and questionnaires. A total of 2728 specialists were then asked to indicate how frequently they performed each task in the inventory and to suggest additional tasks. Face and content validity were evaluated using interviews and the questionnaire. Tasks with similar content were combined into 12 clusters. Internal consistency was evaluated by calculating Cronbach's alpha. Construct validity was determined by examining predefined differences in task performance between medical, surgical and supportive disciplines. Results: Seven hundred and six specialists (36%) returned the questionnaire. The resulting inventory of 91 tasks showed adequate face and content validity. Internal consistency of the task clusters was adequate. Significant differences in task performance between medical, surgical and supportive disciplines indicated construct validity. Conclusion: We established a comprehensive, generic and valid inventory of tasks of specialists that appears applicable across medical, surgical and supportive disciplines.
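Internal consistency via Cronbach's alpha can be computed directly from the item scores of each task cluster. The following is an illustrative sketch, not the authors' actual analysis code; the example score matrix is invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the cluster
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: five respondents rating one four-item task cluster
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```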
Students’ perceptions of teaching quality are vital for quality assurance purposes. An increasingly used, department-independent instrument is the (Cleveland) clinical teaching effectiveness instrument (CTEI). Although the CTEI was developed carefully and its validity and reliability have been confirmed, we noted an opportunity for improvement: the labels of its rating scales intermingle the frequency and the quality of teaching behaviours. Our aim was to investigate whether frequency and quality scores on the CTEI items differed. A sample of 112 residents anonymously completed the CTEI with separate 5-point rating scales for frequency and quality. Differences between frequency and quality scores were analysed using paired t tests. Quality was, on average, rated higher than frequency, with significant differences for 10 of the 15 items and a large effect size for the difference in overall mean scores. Since quality was generally rated higher than frequency, we recommend distinguishing frequency from quality in the rating scales. This distinction yields unambiguous outcomes, which may be conducive to providing concrete and accurate feedback, improving faculty development and making fair decisions concerning promotion, tenure or salary.
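A paired t test with a paired-samples effect size, as described above, can be reproduced in a few lines. The sketch below uses scipy and invented 5-point ratings for a single CTEI item; it illustrates the method, not the study's own data or analysis.

```python
import numpy as np
from scipy import stats

# Invented ratings of one CTEI item by the same residents,
# once for frequency and once for quality of the teaching behaviour.
frequency = np.array([3, 4, 2, 3, 4, 3, 2, 4, 3, 3])
quality   = np.array([4, 4, 3, 4, 5, 4, 3, 4, 4, 4])

t, p = stats.ttest_rel(quality, frequency)  # paired t test

# Cohen's d for paired samples: mean difference / SD of the differences
diff = quality - frequency
d = diff.mean() / diff.std(ddof=1)

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```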