Background: Resident milestones are objective instruments that assess a resident's growth, progression in knowledge, and clinical diagnostic reasoning, but they rely on the subjective appraisal of the supervising attending. Little is known about the use of standardized instruments that may complement the evaluation of resident diagnostic skills in the academic setting.

Objectives: To evaluate a modified bronchiolitis severity assessment tool by appraising the inter-rater variability and reliability between pediatric attendings and pediatric residents.

Methods: Cross-sectional study of children under 24 months of age who presented to a community hospital emergency department with bronchiolitis between January and June 2014. A paired pediatric attending and pediatric resident evaluated each patient. The evaluation included age-based respiratory rate (RR), retractions, peripheral oxygen saturation (SpO2), and auscultation. Cohen's kappa (K) measured inter-rater agreement. Inter-rater reliability (IRR) was assessed with a one-way random, average-measures intra-class correlation (ICC) to evaluate the degree of consistency and the magnitude of disagreement between raters. A value of >0.6 was considered substantial agreement for kappa and good internal consistency for the ICC.

Results: Twenty patients were evaluated. Analysis showed fair agreement for the presence of retractions (K=0.31), auscultation (K=0.33), and the total score (K=0.30). RR (ICC=0.97), SpO2 (ICC=1.00), auscultation (ICC=0.77), and the total score (ICC=0.84) were scored similarly by both raters, indicating excellent IRR. Identification of retractions had the lowest agreement across all statistical analyses.

Conclusion: The use of a standardized instrument, in conjunction with trained resident-teaching staff, can help identify deficiencies in clinical competencies among residents and facilitate the learning process for the identification of pertinent clinical findings.
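For readers unfamiliar with the two statistics named in the Methods, the following is a minimal Python sketch of how Cohen's kappa and a one-way random, average-measures ICC, i.e. ICC(1,k) in the Shrout-Fleiss notation, can be computed for two raters. The data below are synthetic placeholders, not the study's ratings, and the ICC is implemented directly from its ANOVA definition rather than taken from the paper's statistical software.

```python
# Sketch of the abstract's two agreement analyses on hypothetical data.
# Assumption: two raters (attending, resident) scored the same 20 patients
# on an ordinal 0-3 scale; these values are illustrative only.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
attending = rng.integers(0, 4, size=20)                        # hypothetical scores
resident = np.clip(attending + rng.integers(-1, 2, size=20), 0, 3)

# Cohen's kappa: chance-corrected categorical agreement between the two raters.
kappa = cohen_kappa_score(attending, resident)

# One-way random, average-measures ICC: ICC(1,k) = (MSB - MSW) / MSB,
# from a one-way ANOVA with patients as the grouping factor.
ratings = np.column_stack([attending, resident]).astype(float)  # n patients x k raters
n, k = ratings.shape
grand_mean = ratings.mean()
ms_between = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum() / (n - 1)
ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc_1k = (ms_between - ms_within) / ms_between

print(f"kappa = {kappa:.2f}, ICC(1,k) = {icc_1k:.2f}")
```

Under the study's stated thresholds, a kappa above 0.6 would be read as substantial agreement and an ICC above 0.6 as good internal consistency; the study's retraction scores (K=0.31) fell well below that bar while the average-measures ICCs for RR and SpO2 did not.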