Abstract. Objective: To describe interobserver variability among emergency medicine (EM) faculty when using global assessment (GA) rating scales and performance-based criterion (PBC) checklists to evaluate EM residents' clinical skills during standardized patient (SP) encounters. Methods: Six EM residents were videotaped during encounters with SPs and subsequently evaluated by 38 EM faculty at four EM residency sites. There were two encounters in which a single SP presented with headache, two in which a second SP presented with chest pain, and two in which a third SP presented with abdominal pain, resulting in two parallel sets of three cases. Faculty used GA rating scales to evaluate history taking, physical examination, and interpersonal skills for the first set of three cases. Each encounter in the second set was evaluated with complaint-specific PBC checklists developed by SAEM's National Consensus Group on Clinical Skills Task Force. Results: Standard deviations, computed for each score distribution, were generally similar across evaluation methods. None of the distributions deviated significantly from a Gaussian distribution, as indicated by the Kolmogorov-Smirnov goodness-of-fit test. On PBC checklists, 80% agreement among faculty observers was found for 74% of chest pain, 45% of headache, and 30% of abdominal pain items. Conclusions: When EM faculty evaluate the clinical performance of EM residents during videotaped SP encounters, interobserver variability is similar whether a PBC checklist or a GA rating scale is used.
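The summary statistics reported in this abstract (per-distribution standard deviations, a Kolmogorov-Smirnov goodness-of-fit test against a Gaussian, and the proportion of checklist items reaching 80% interobserver agreement) can be illustrated with a minimal sketch. The data, the 1-5 rating scale, the 20-item checklist, and all variable names below are hypothetical assumptions for illustration, not values or methods taken from the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical global-assessment scores: 38 faculty raters on a 1-5 scale (illustrative only).
ga_scores = rng.integers(1, 6, size=38).astype(float)

# Standard deviation and a Kolmogorov-Smirnov goodness-of-fit test against a Gaussian
# parameterized by the sample mean and SD.
sd = ga_scores.std(ddof=1)
ks_stat, p_value = stats.kstest(ga_scores, "norm", args=(ga_scores.mean(), sd))
print(f"SD = {sd:.2f}, KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")

# Hypothetical checklist data: 38 raters x 20 binary items (1 = "done", 0 = "not done").
checklist = rng.integers(0, 2, size=(38, 20))

# An item reaches 80% interobserver agreement when at least 80% of raters give the same response.
modal_share = np.maximum(checklist.mean(axis=0), 1 - checklist.mean(axis=0))
pct_items_agreeing = (modal_share >= 0.80).mean() * 100
print(f"{pct_items_agreeing:.0f}% of checklist items reached 80% interobserver agreement")
```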
Objective: To test the overall reliability of a performance-based clinical skills assessment for entering emergency medicine (EM) residents, and to investigate the reliability of separately reported diagnostic and management scores for a standardized patient case, subjective scoring of patient notes, and interstation exercise scores. Methods: Thirty-four first-year EM residents were tested using a 10-station standardized patient (SP) examination. Following each 10-minute encounter, the residents completed a patient note that included a differential diagnosis and management plan. The residents also were asked to read an ECG or chest x-ray (CXR) associated with each case. History taking, physical examination, and interpersonal skills were scored by the SPs; the patient note and the CXR and ECG readings were scored by faculty emergency physicians. Intercase reliability was determined for the residents.

The American Board of Emergency Medicine (ABEM) introduced an oral examination in the 1970s as part of the certification process, aimed at better assessing clinical competence. In addition, the ABEM required residency directors to attest to the clinical skills of applicants to the Board. In response to that requirement, we are now developing a comprehensive performance-based assessment that will provide baseline data about entering emergency medicine (EM) residents' global clinical competence and competence in related skills, with specific information about their strengths and weaknesses. Encounters with standardized patients (SPs), nonphysicians trained to portray a specific history and physical examination, along with related interstation exercises, have been used in undergraduate and graduate medical education to assess clinical competence. Traditionally, history taking, physical examination, interpersonal skills, and the patient note written after the SP encounter have been the major components reported separately in performance-based assessment programs because of their relatively acceptable reliability. The reliability of additional components is still being investigated.
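Intercase reliability for a multi-station SP examination is often summarized with an internal-consistency coefficient such as Cronbach's alpha computed across station scores. The abstract does not state which coefficient was used, so the sketch below is only one plausible approach, with hypothetical score data, an assumed percent-correct scale, and illustrative dimensions matching the 34 residents and 10 stations described above.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency estimate across stations (rows = examinees, columns = stations)."""
    n_stations = scores.shape[1]
    station_var = scores.var(axis=0, ddof=1).sum()   # sum of per-station score variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of each examinee's total score
    return n_stations / (n_stations - 1) * (1 - station_var / total_var)

# Hypothetical data: 34 residents x 10 stations, percent-correct history/physical scores.
rng = np.random.default_rng(1)
station_scores = rng.uniform(40, 95, size=(34, 10))
print(f"Intercase (internal-consistency) reliability = {cronbach_alpha(station_scores):.2f}")
```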