The effectiveness of a continuing education programme in paediatric psychopharmacology designed for primary healthcare providers was objectively measured, based on the assumption that training would lead to measurable changes in referral patterns and in established clinical measures of referred patients. Using established, valid and reliable measures of clinical urgency embedded in a regional healthcare system since 2002, the referrals to child and adolescent psychiatric services of physicians who participated in the training (n=99) were compared pre-training and post-training, and with those of non-participating, untrained referring physicians (n=7753) making referrals over the same period. Referrals were analysed for evidence of change based on frequencies and measures of clinical urgency. Participants in the training programme also completed standardised baseline and outcome self-evaluations. Congruent with participants' self-reported improvements in knowledge and practice, analysis of the frequency and clinical urgency of referrals to paediatric psychiatric services over the study period indicated that trained physicians made more appropriate referrals (clinically more severe) and made fewer referrals to emergency services. Quantitative clinical differences, as rated by intake clinicians blind to the study group designation of referrals, were observed within the trained physician group pre-training and post-training, and between the trained physician group and the unexposed physician group. The results illustrate a novel model for objectively measuring change in physician practice resulting from training in paediatric mental health management.
Introduction

Continuing medical education (CME) is grounded in the belief that increased physician knowledge leads to better physician practice, which in turn leads to improved patient outcomes. A measurable change in patients' health as a function of CME is, however, rarely demonstrated. Most measures of CME effect focus on physicians' self-reported uptake of CME content.1-6 Yet it is well documented that self-report, as a consequence of CME participation, invariably suffers from the Hawthorne effect,7 wherein self-reported effects are systematically biased simply through participation. There is little, if any, research employing independent, objective measures of the effect of CME programmes on physicians' practice.8

The gaps between perceived, actual and ideal performance in healthcare are real. For example, a recent meta-analysis found that most studies fail to show a significant correlation between CME and health outcomes.5 While research has focused on improving physician practice through an examination of various styles of CME, demonstrating that small interactive workshops yield greater improvements than didactic sessions,5 it has, to a lesser degree, examined CME effects on physician practice in relation to patient outcomes. When quantifiable, outcomes that result from an action or activity1 are more objective, rendering them adequate and unbiased assessments of CME.6 In fact...