Cognitive aids (CA), including emergency manuals and checklists, are tools designed to assist users in prioritizing and performing complex tasks during time-sensitive, high-stress situations (Marshall in Anesth Analg 117(5):1162–1171, 2013; Marshall and Mehra in Anaesthesia 69(7):669–677, 2014). The Society for Pediatric Anesthesia (SPA) has developed a series of emergency checklists tailored for use by pediatric perioperative teams that cover a wide range of intraoperative critical events (Shaffner et al. in Anesth Analg 117(4):960–979, 2013). In this study, we evaluated user preferences for a CA (the SPA checklist) in two presentation formats, paper and electronic, during the management of simulated critical events. Anesthesia trainees managed the simulated critical events under one of three randomized conditions: (1) from memory alone, (2) with a paper version of the CA, or (3) with an electronic version of the CA. After participating in the simulated critical events, participants were asked to complete a survey about their experience using the different versions of the CA. The percentage of favorable responses for each format of the CA was compared using a mixed-effects proportional odds model. There were 143 simulated events managed by 89 anesthesia trainees. Approximately one in three trainees (electronic 29%, paper 30%) assigned to use the CA chose not to use it and completed the scenario from memory alone. The survey was completed by 68% of participants; 58% of trainees preferred the paper version and 35% preferred the electronic version. All survey responses that reached statistical significance favored the paper version. In this study, anesthesia trainees had a favorable opinion of the content and perceived clinical relevance of both versions of the CA. In both quantitative and qualitative analyses, participants preferred the paper version of the CA over the electronic version.
Despite overall favorable responses to the CA, a sizeable number of participants chose not to use either version of the CA during the crisis.
The format (paper or electronic) of the CA did not affect its impact on clinician performance in this study. Clinician compliance with use of the CA was likewise unaffected by format, suggesting that other factors may determine whether clinicians choose to use a CA. Time to use of the CA did not affect clinical performance, suggesting that it may not be when CAs are used but how they are used that determines their impact. The current study highlights the importance of not only familiarizing clinicians with the content of a CA but also training them in when and how to use an emergency CA.
Background Pediatric perioperative cardiac arrests are rare events that require rapid, skilled, and coordinated efforts to optimize outcomes. We developed a tool for assessing clinician performance during perioperative critical events, termed Anesthesia-centric Pediatric Advanced Life Support (A-PALS). Here, we describe the development and evaluation of the A-PALS scoring instrument. Methods A group of raters scored videos of a perioperative team managing simulated events representing a range of scenarios and competency levels. We assessed agreement with the reference-standard grading, as well as interrater and intrarater reliability. Results Overall, raters agreed with the reference standard 86.2% of the time. Rater scores for scenarios that depicted highly competent performance correlated better with the reference standard than scores for scenarios that depicted low clinical competence (P < 0.0001). Agreement with the reference standard was significantly (P < 0.0001) associated with scenario type, item category, level of competency displayed in the scenario, correct versus incorrect actions, and whether the action was performed versus not performed. Kappa values were significantly (P < 0.0001) higher for highly competent performances than for less competent performances (good: mean = 0.83 [standard deviation = 0.07] versus poor: mean = 0.61 [standard deviation = 0.14]). The intraclass correlation coefficient (interrater reliability) was 0.97 for the raters’ composite scores on correct actions and 0.98 for their composite scores on incorrect actions. Conclusions This study provides evidence for the validity of the A-PALS scoring instrument and demonstrates that it can provide reliable scores, although clinician performance affects reliability.
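The kappa values reported above are chance-corrected agreement statistics. As a minimal illustration of how Cohen's kappa is computed for two raters (the rater data below are invented for illustration, not taken from the study):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    labels = np.union1d(a, b)
    # observed proportion of agreement
    p_o = np.mean(a == b)
    # expected agreement if the raters labeled independently
    p_e = sum(np.mean(a == lab) * np.mean(b == lab) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters marking 10 actions as
# performed correctly (1) or incorrectly (0).
rater_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_b = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater_a, rater_b), 2))  # → 0.6
```

Here the raters agree on 8 of 10 actions (p_o = 0.8), but 0.5 of that agreement would be expected by chance given each rater's label frequencies, giving kappa = (0.8 − 0.5)/(1 − 0.5) = 0.6.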