The results suggest that computer-assisted learning methods will be of greater help to students who do not find traditional methods effective. Exploring the factors behind this difference is a matter for future research.
The use of checklists is recommended for assessing competency in central venous catheterization (CVC) insertion. To explore the use of a global rating scale in the assessment of CVC skills, this study compared it with two checklists in the context of a formative examination using simulation. Video-recorded performances of CVC insertion by 34 first-year medical residents were reviewed by two independent, trained evaluators. Each evaluator used three assessment tools: a 10-item checklist, a 21-item checklist, and a nine-item global rating scale. Exploratory principal component analysis of the global rating scale revealed two factors, accounting for 84.1% of the variance: technical ability and safety. The two checklist scores correlated positively with the weighted factor score on technical ability (0.49 [95% CI 0.17 to 0.71] for the 10-item checklist; 0.43 [95% CI 0.10 to 0.67] for the 21-item checklist) and negatively with the weighted factor score on safety (-0.17 [95% CI -0.48 to 0.18] for the 10-item checklist; -0.13 [95% CI -0.45 to 0.22] for the 21-item checklist). A checklist score of <80% was a strong indication of incompetence; however, a high checklist score did not preclude incompetence. Ratings on the global rating scale identified an additional 11 candidates (32%) who were deemed incompetent despite scoring >80% on both checklists. All of these candidates committed serious errors. In conclusion, the universal adoption of checklists as the preferred method of assessing procedural skills should be questioned, and the inclusion of global rating scales should be considered.
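The confidence intervals above can be reproduced, to rounding, from the reported correlations alone. Below is a minimal sketch assuming the intervals were computed with the standard Fisher z-transformation on n = 34 paired observations; the abstract does not state the method, so this is an illustration rather than the authors' procedure.

```python
import math

def fisher_ci(r, n, z_crit=1.96):
    """95% CI for a Pearson correlation via the Fisher z-transformation.

    Hypothetical reconstruction: the abstract does not say how its CIs
    were derived; n = 34 and a normal approximation are assumptions.
    """
    z = math.atanh(r)                # Fisher z-transform of r
    se = 1.0 / math.sqrt(n - 3)      # approximate standard error of z
    lo = math.tanh(z - z_crit * se)  # back-transform interval endpoints
    hi = math.tanh(z + z_crit * se)
    return lo, hi

# Reported correlations between checklist scores and factor scores
for label, r in [("10-item vs technical", 0.49), ("21-item vs technical", 0.43),
                 ("10-item vs safety", -0.17), ("21-item vs safety", -0.13)]:
    lo, hi = fisher_ci(r, n=34)
    print(f"{label}: r = {r:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

Under these assumptions the script returns [-0.48, 0.18] and [-0.45, 0.22] for the safety correlations, matching the reported intervals exactly, and values within about 0.01 of those reported for technical ability.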
Simulation is rapidly gaining ground in health care education and has won growing acceptance as an educational method and patient safety tool. Despite this, the state of simulation in health care education has not yet been evaluated on a global scale. In this project, we studied the global status of simulation in health care education by determining the degree of financial support, infrastructure, manpower, information technology capabilities, engagement of learner groups, and research and scholarly activities, as well as the barriers, strengths, opportunities for growth, and other aspects of simulation in health care education. We used a two-stage process: an online survey followed by a site visit that included interviews and debriefings. Forty-two simulation centers worldwide participated in the study. The results show that, despite enormous interest and enthusiasm in the health care community, the use of simulation in health care education is limited to specific areas and is not a budgeted item in many institutions. The absence of a sustainable business model, together with insufficient financial support for budget, infrastructure, manpower, research, and scholarly activities, slows the advancement of simulation. Specific recommendations based on these findings are made to support simulation through its next developmental stages.
Objective: To evaluate the feasibility, reliability and acceptability of the mini clinical evaluation exercise (mini‐CEX) for performance assessment among international medical graduates (IMGs).
Design, setting and participants: Observational study of 209 patient encounters involving 28 IMGs and 35 examiners at three metropolitan teaching hospitals in New South Wales, Victoria and Queensland, September–December 2006.
Main outcome measures: The reliability of the mini‐CEX was estimated using generalisability (G) analysis, and its acceptability was evaluated by a written survey of the examiners and IMGs.
Results: The G coefficient for eight encounters was 0.88, indicating a projected reliability of 0.90 for 10 encounters. Almost half of the IMGs (7/16) and most examiners (14/18) were satisfied with the mini‐CEX as a learning tool. Most of the IMGs and examiners valued the immediate feedback, a strong component of the tool.
Conclusion: The mini‐CEX is a reliable tool for performance assessment of IMGs, and is acceptable to and well received by both learners and supervisors.
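The projection in the results is consistent with the decision-study step of generalisability theory, which for a single facet follows the Spearman–Brown prophecy formula. A quick check, assuming that formula underlies the reported figures (the abstract does not spell out the calculation):

```python
def spearman_brown(rel, k):
    """Projected reliability when the number of encounters is scaled by k
    (the single-facet decision-study step in generalisability theory)."""
    return (k * rel) / (1 + (k - 1) * rel)

g_eight = 0.88                             # G coefficient for 8 encounters
g_ten = spearman_brown(g_eight, k=10 / 8)  # scale up to 10 encounters
print(f"Projected reliability for 10 encounters: {g_ten:.2f}")  # 0.90
```

With k = 10/8, the projection gives 1.1 / 1.22 ≈ 0.90, matching the figure reported for 10 encounters.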
PBL was as effective as lecture-based continuing medical education (CME) programs for knowledge uptake and retention. Further study is warranted to investigate whether the higher perceived educational value and the increased response rate to delayed testing are replicable in other RCTs that address common confounders, and whether these factors influence future CME participation, changes in physician clinical behavior, or patient health outcomes.