Background: Improving the speed of systematic review (SR) development is key to supporting evidence-based medicine. Machine learning tools that semi-automate citation screening might improve efficiency. Few studies have assessed the use of screening prioritization functionality or compared two tools head to head. In this project, we compared the performance of two machine-learning tools for potential use in citation screening. Methods: Using 9 evidence reports previously completed by the ECRI Institute Evidence-based Practice Center team, we compared the performance of Abstrackr and EPPI-Reviewer, two off-the-shelf citation screening tools, for identifying relevant citations. Screening prioritization functionality was tested for 3 large reports and 6 small reports on a range of clinical topics. Large report topics were imaging for pancreatic cancer, indoor allergen reduction, and inguinal hernia repair. We trained Abstrackr and EPPI-Reviewer and screened all citations in 10% increments. In Task 1, we input whether an abstract was ordered for full-text screening; in Task 2, we input whether an abstract was included in the final report. For both tasks, screening continued until all studies ordered and included for the actual reports were identified. We assessed the potential reduction in hypothetical screening burden (the proportion of citations screened to identify all included studies) offered by each tool for all 9 reports. Results: For the 3 large reports, both EPPI-Reviewer and Abstrackr performed well, with potential reductions in screening burden of 4 to 49% (Abstrackr) and 9 to 60% (EPPI-Reviewer). Both tools performed markedly worse on 1 large report (inguinal hernia), possibly because of its heterogeneous key questions. Based on McNemar's test for paired proportions in the 3 large reports, EPPI-Reviewer outperformed Abstrackr at identifying articles ordered for full-text review, but Abstrackr performed better in 2 of 3 reports at identifying articles included in the final report. For the small reports, both tools provided benefits, but EPPI-Reviewer generally outperformed Abstrackr on both tasks, although these results were often not statistically significant. Conclusions: Abstrackr and EPPI-Reviewer performed well, but prioritization accuracy varied greatly across reports. Our work suggests that screening prioritization functionality is a promising modality, offering efficiency gains without sacrificing human involvement in the screening process.
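The two quantities reported in this abstract, screening burden and McNemar's test on paired screening outcomes, can be computed as in the minimal Python sketch below; the ranking, relevant-citation set, and discordant-pair counts are hypothetical stand-ins, not data from the study.

```python
# A minimal sketch, assuming `ranking` is one tool's prioritized citation list
# and `relevant` is the set of citations ultimately included in the report.
from scipy.stats import chi2


def screening_burden(ranking: list[str], relevant: set[str]) -> float:
    """Proportion of citations screened before every relevant one is found."""
    found: set[str] = set()
    for i, cid in enumerate(ranking, start=1):
        if cid in relevant:
            found.add(cid)
            if found == relevant:
                return i / len(ranking)
    raise ValueError("ranking does not contain every relevant citation")


def mcnemar(b: int, c: int) -> tuple[float, float]:
    """Continuity-corrected McNemar's chi-square on discordant pairs:
    b = citations found by one tool only, c = found by the other tool only."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)


# Illustrative use with made-up discordant-pair counts:
stat, p = mcnemar(b=35, c=18)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```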
Background: Pediatric lead exposure in the United States (U.S.) remains a preventable public health crisis. Shareable electronic clinical decision support (CDS) could improve lead screening and management, but discrepancies among federal, state, and local recommendations could pose significant challenges for implementation. Methods: We identified publicly available guidance on lead screening and management. We extracted definitions of elevated lead and recommendations for screening, follow-up, reporting, and management. We compared thresholds and levels of obligation for management actions. Finally, we assessed the feasibility of developing shareable CDS. Results: We identified 54 guidance sources. States offered differing definitions of elevated lead and differing recommendations for screening, reporting, follow-up, and management. Only 37 of the 48 states providing guidance used the Centers for Disease Control and Prevention (CDC) definition of elevated lead. There were 17 distinct management actions. Guidance sources indicated an average of 5.5 management actions but offered different criteria and levels of obligation for these actions. Despite these differences, the recommendations were well structured, actionable, and encodable, indicating that shareable CDS is feasible. Conclusion: Current variability across guidance poses challenges for clinicians. Developing shareable CDS is feasible and could improve pediatric lead screening and management. Shareable CDS would need to account for local variability in guidance.
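As a rough illustration of the encodability finding, the sketch below models jurisdiction-specific thresholds and obligation levels as data that a shareable CDS rule engine could evaluate; the thresholds, actions, and jurisdiction names are illustrative placeholders, not guidance extracted by the study.

```python
# Hypothetical sketch of encodable, shareable CDS logic for elevated-lead
# management; all values below are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Rule:
    threshold_ug_dl: float  # blood lead level (ug/dL) triggering the action
    action: str             # management action
    obligation: str         # level of obligation, e.g. "must", "should", "may"


# Illustrative rule sets keyed by jurisdiction; real guidance varies by state.
GUIDANCE = {
    "Federal": [
        Rule(3.5, "confirm with venous sample", "should"),
        Rule(3.5, "environmental investigation", "may"),
    ],
    "StateX": [  # hypothetical state overriding the federal defaults
        Rule(5.0, "report to health department", "must"),
    ],
}


def recommended_actions(jurisdiction: str, level_ug_dl: float) -> list[Rule]:
    """Return the management actions triggered at a given blood lead level."""
    rules = GUIDANCE.get(jurisdiction, GUIDANCE["Federal"])
    return [r for r in rules if level_ug_dl >= r.threshold_ug_dl]


for r in recommended_actions("StateX", 6.2):
    print(f"{r.obligation}: {r.action}")
```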
Background and Significance Quality measurement can drive improvement in clinical care and allow clinicians to report quality care easily, but creating quality measures is a time-consuming and costly process. ECRI (formerly the Emergency Care Research Institute) has pioneered a process for systematically translating clinical practice guidelines into electronic quality measures via a transparent and reproducible pathway. This process could be used to augment or support the development of electronic quality measures by the American Academy of Otolaryngology–Head and Neck Surgery Foundation (AAO-HNSF) and others as the Centers for Medicare & Medicaid Services transitions from the Merit-Based Incentive Payment System (MIPS) to the MIPS Value Pathways for quality reporting. Methods We used a transparent and reproducible process to create electronic quality measures based on recommendations from 2 AAO-HNSF clinical practice guidelines (cerumen impaction and allergic rhinitis). The steps of this process include source material review, electronic content extraction, logic development, implementation barrier analysis, content encoding and structuring, and measure formalization. Proposed measures then go through the standard publication process for AAO-HNSF measures. Results The 2 guidelines contained 29 recommendation statements, of which 7 were translated into electronic quality measures and published. Intermediate products of the guideline conversion process facilitated development and were retained to support review, updating, and transparency. Of the 7 initially published quality measures, 6 were approved as 2018 MIPS measures, and 2 continued to demonstrate a gap in care after a year of data collection. Conclusion Developing high-quality, registry-enabled measures from guidelines via a rigorous, reproducible process is feasible. The streamlined process was effective in producing quality measures for publication in a timely fashion. Efforts to better identify gaps in care and to more quickly recognize recommendations that would not translate well into quality measures could further streamline this process.
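For readers unfamiliar with the structure of electronic quality measures, the sketch below shows the numerator/denominator logic such measures encode; the codes and criteria are hypothetical illustrations, not the published AAO-HNSF measure logic.

```python
# A minimal sketch of electronic quality measure logic under assumed,
# illustrative coding criteria (not the published measures).
from dataclasses import dataclass


@dataclass
class Encounter:
    diagnoses: set[str]   # coded diagnoses, e.g. ICD-10
    procedures: set[str]  # coded procedures/interventions


def in_denominator(e: Encounter) -> bool:
    """Eligible population: e.g., encounters with a cerumen impaction code."""
    return "H61.2" in e.diagnoses  # illustrative ICD-10 code


def in_numerator(e: Encounter) -> bool:
    """Recommended care delivered: e.g., documented cerumen removal."""
    return in_denominator(e) and "69210" in e.procedures  # illustrative CPT


def measure_rate(encounters: list[Encounter]) -> float:
    """Share of eligible encounters in which the recommended care occurred."""
    denom = [e for e in encounters if in_denominator(e)]
    num = [e for e in denom if in_numerator(e)]
    return len(num) / len(denom) if denom else 0.0


# Illustrative use with two made-up encounters (expected rate: 50%):
rate = measure_rate([Encounter({"H61.2"}, {"69210"}), Encounter({"H61.2"}, set())])
print(f"measure rate: {rate:.0%}")
```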
Objective The Patient-Centered Outcomes Research Institute (PCORI) horizon scanning system is an early warning system for healthcare interventions in development that could disrupt standard care. We report preliminary findings from the patient engagement process. Methods The system broadly scans many resources to identify and monitor interventions up to 3 years before their anticipated entry into U.S. health care. Topic profiles are written for included interventions with late-phase trial data and circulated with a structured review form for stakeholder comment to determine disruption potential. Stakeholders include patients and caregivers recruited from credible community sources. They view an orientation video, comment on topic profiles, and take a survey about their experience. Results As of March 2020, 312 monitored topics (some of which were archived) had been derived from 3,500 information leads; 121 met the criteria for topic profile development and stakeholder comment. We invited 54 patients and caregivers to participate; 39 reviewed at least one report. Their perspectives informed analyst nominations for 14 topics in two 2019 High Potential Disruption Reports. Thirty-four patient stakeholders completed the user-experience survey. Most agreed (68%) or somewhat agreed (26%) that they were confident they could provide useful comments, and 94% would recommend that others participate. Conclusions The system has successfully engaged patients and caregivers, who contributed unique and important perspectives that informed the selection of topics deemed to have high potential to disrupt clinical care. Most participants would recommend that others participate in this process. More research is needed to inform optimal methods for recruiting and engaging patient and caregiver stakeholders and to reduce barriers to participation.