A multidisciplinary Tier 3 weight management service in primary care recruited patients with a body mass index ≥40 kg·m⁻², or ≥30 kg·m⁻² with obesity-related co-morbidity, to a 1-year programme. A cohort of 230 participants was recruited and evaluated using the National Obesity Observatory Standard Evaluation Framework. The primary outcome was weight loss of at least 5% of baseline weight at 12 months. Diet was assessed using the two-item food frequency questionnaire, physical activity using the General Practice Physical Activity Questionnaire and quality of life using the EuroQol-5D-5L questionnaire. A focus group explored the participants' experiences. Baseline mean weight was 124.4 kg and mean body mass index was 44.1 kg·m⁻². A total of 102 participants achieved at least 5% weight loss at 12 months. Mean weight loss was 10.2 kg among the 117 participants who completed the 12-month programme; baseline observation carried forward analysis gave a mean weight loss of 5.9 kg at 12 months. Fruit and vegetable intake, activity level and quality of life all improved. The dropout rate was 14.3% at 6 months and 45.1% at 1 year. Focus group participants described high levels of satisfaction. It was possible to deliver a Tier 3 weight management service for obese patients with complex co-morbidity in a primary care setting with a full multidisciplinary team, achieving good health outcomes compared with existing services.
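The baseline observation carried forward (BOCF) analysis mentioned above is a conservative imputation: participants who drop out are assumed to have returned to their baseline weight. A minimal sketch of that calculation is shown below; the column names and data values are invented for illustration and are not taken from the study.

```python
import numpy as np
import pandas as pd

# Hypothetical weight-management cohort; NaN at 12 months means the
# participant dropped out of the programme.
df = pd.DataFrame({
    "baseline_kg": [130.0, 118.5, 142.0, 110.2],
    "week52_kg":   [119.0, np.nan, 128.5, np.nan],
})

# BOCF: missing 12-month weights are replaced with the baseline weight,
# i.e. dropouts are assumed to have lost nothing.
df["week52_bocf"] = df["week52_kg"].fillna(df["baseline_kg"])

df["loss_kg"] = df["baseline_kg"] - df["week52_bocf"]
df["lost_5pct"] = df["loss_kg"] >= 0.05 * df["baseline_kg"]

print(f"Mean BOCF weight loss: {df['loss_kg'].mean():.1f} kg")
print(f"Achieved >=5% loss: {int(df['lost_5pct'].sum())} of {len(df)}")
```

This is why the BOCF estimate (5.9 kg) is lower than the completer-only estimate (10.2 kg): every dropout contributes a loss of zero.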
Background: The importance of teaching the skills and practice of evidence-based medicine (EBM) to medical professionals has grown steadily in recent years. Alongside this growth is a need to evaluate the effectiveness of EBM curricula as assessed by competency in the five 'A's: asking, acquiring, appraising, applying and assessing (impact and performance). EBM educators in medical education will benefit from a compendium of existing tools for assessing EBM competencies in their settings. The purpose of this review is to provide a systematic review and taxonomy of validated tools that evaluate EBM teaching in medical education. Methods: We searched MEDLINE, EMBASE, the Cochrane Library, Educational Resources Information Centre (ERIC) and Best Evidence Medical Education (BEME) databases, and the references of retrieved articles published between January 2005 and March 2019. We present the identified tools along with their psychometric properties, including validity, reliability and relevance to the five domains of EBM practice and the dimensions of EBM learning. We also assessed the quality of the tools, defining high-quality tools as those supported by established interrater reliability (where applicable), objective (non-self-reported) outcome measures and at least three types of established validity evidence. We report our study in accordance with the PRISMA guidelines. Results: We identified 1719 potentially relevant articles, of which 63 full-text articles were assessed for eligibility against inclusion and exclusion criteria. Twelve articles, each with a unique and newly identified tool, were included in the final analysis. Of the twelve tools, all assessed the third step of EBM practice (appraise) and four assessed only that step. None of the twelve tools assessed the last step of EBM practice (assess). Of the seven domains of EBM learning, ten tools assessed knowledge gain, nine assessed skills and one assessed attitude. None addressed reaction to EBM teaching, self-efficacy, behaviours or patient benefit. Of the twelve tools identified, six were of high quality. We also provide a taxonomy of tools, using the CREATE framework, for EBM teachers in medical education. Conclusions: Six tools of reasonable validity are available for evaluating most steps of EBM practice and some domains of EBM learning. Further development and validation of tools that evaluate all the steps of EBM practice and all educational outcome domains are needed. Systematic review registration: PROSPERO CRD42018116203.
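The high-quality designation described in the Methods reduces to a three-part rule. The sketch below illustrates that rule only; the field names and example tools are invented and do not correspond to the instruments included in the review.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ToolAppraisal:
    """Hypothetical record of the evidence reported for one assessment tool."""
    name: str
    interrater_reliability: Optional[bool]  # None = not applicable (e.g. machine-scored)
    objective_outcomes: bool                # non-self-reported outcome measures
    validity_evidence_types: int            # count of established validity evidence types

def is_high_quality(tool: ToolAppraisal) -> bool:
    # High quality = established interrater reliability (where applicable),
    # objective outcome measures, and >= 3 types of validity evidence.
    irr_ok = tool.interrater_reliability in (True, None)
    return irr_ok and tool.objective_outcomes and tool.validity_evidence_types >= 3

# Invented examples, for illustration only.
tools = [
    ToolAppraisal("Tool A", True, True, 4),
    ToolAppraisal("Tool B", None, False, 3),
]
print([t.name for t in tools if is_high_quality(t)])  # -> ['Tool A']
```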
Background: Ninety-one per cent of primary care trusts were using some form of referral management in 2009, although evidence for its effectiveness is limited. Aim: To assess the impact of three referral management centres (RMCs) and two internal peer-review approaches to referral management on hospital outpatient attendance rates. Design and setting: A retrospective time-series analysis of 376 000 outpatient attendances over 3 years from 85 practices divided into five groups, with 714 000 registered patients, in one English primary care trust. Method: The age-standardised GP-referred first outpatient monthly attendance rate was calculated for each group from April 2009 to March 2012. This was divided by the equivalent monthly England rate to derive a rate ratio. Linear regression tested for association between the introduction of referral management and change in the outpatient attendance rate and rate ratio. Annual group budgets for referral management were obtained. Results: Referral management was not associated with a reduction in the outpatient attendance rate in any group. There was a statistically significant increase in the attendance rate in one group (an RMC), which had an increase of 1.05 attendances per 1000 persons per month (95% confidence interval = 0.46 to 1.64; attendance rate ratio increase of 0.07) after adjustment for autocorrelation. Mean annual budgets ranged from £0.55 to £6.23 per registered patient in 2011/2012. RMCs were more expensive (mean annual budget £5.18 per registered patient) than internal peer-review approaches (mean annual budget £0.97 per registered patient). Conclusion: Referral-management schemes did not reduce outpatient attendance rates, and RMCs were more expensive than internal peer review.
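The analysis described in the Method section amounts to regressing a monthly rate ratio on an indicator for the introduction of referral management. A minimal sketch of that structure is shown below, using invented data and an assumed scheme start date; the published analysis additionally adjusted for autocorrelation, which this plain ordinary-least-squares sketch omits.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated monthly attendance rates, April 2009 to March 2012 (invented values).
months = pd.period_range("2009-04", "2012-03", freq="M")
rng = np.random.default_rng(0)
group_rate = 10.0 + rng.normal(0, 0.5, len(months))   # group attendances per 1000 persons
england_rate = 9.5 + rng.normal(0, 0.3, len(months))  # equivalent England rate

df = pd.DataFrame({
    # Dividing by the England rate gives the rate ratio used as the outcome.
    "rate_ratio": group_rate / england_rate,
    # Step indicator: 1 from the (assumed) month referral management started.
    "scheme": (months >= pd.Period("2010-04", freq="M")).astype(int),
})

X = sm.add_constant(df["scheme"])
fit = sm.OLS(df["rate_ratio"], X).fit()
# The 'scheme' coefficient estimates the change in rate ratio after introduction.
print(fit.params)
```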
The COVID-19 pandemic has created a challenge for all medical educators. There is a clear need to train the next generation of doctors whilst ensuring that patient safety is preserved. The OSCE has long been used as the gold standard for assessing clinical competency in undergraduates (Khan et al., 2013a). However, social distancing rules have meant that we have had to reconsider our traditional assessment methods. We held a remote eight-station summative OSCE (rOSCE) for three final-year resit students using Microsoft Teams. Apart from clinical examinations and practical procedures, which are assessed elsewhere in our programme, the content was similar to our standard OSCE. Staff and student training ensured familiarity with the assessment modality. The rOSCE was found to be a feasible tool with high face validity. The rOSCE is a remote assessment tool that can offer an alternative to traditional face-to-face OSCEs for use in high-stakes examinations. Although further research is needed, we believe that the rOSCE is scalable to larger cohorts of students and is adaptable to the needs of most undergraduate clinical competency assessments.