2020
DOI: 10.1186/s13643-020-01324-7
Machine learning for screening prioritization in systematic reviews: comparative performance of Abstrackr and EPPI-Reviewer

Abstract: Background: Improving the speed of systematic review (SR) development is key to supporting evidence-based medicine. Machine learning tools which semi-automate citation screening might improve efficiency. Few studies have assessed use of screening prioritization functionality or compared two tools head to head. In this project, we compared performance of two machine-learning tools for potential use in citation screening. Methods: Using 9 evidence reports previously completed by the ECRI Institute Evidence-based…
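The screening prioritization the abstract refers to is, in general terms, a ranking task: a model trained on the citations already screened scores the remaining records so that likely includes surface first. The sketch below is a minimal illustration of that idea using TF-IDF features and logistic regression; it is not the algorithm implemented in Abstrackr or EPPI-Reviewer, and the function and variable names are hypothetical.

```python
# Minimal sketch of ML-assisted screening prioritization (illustrative only;
# not the actual method used by Abstrackr or EPPI-Reviewer).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritize_citations(screened_texts, screened_labels, unscreened_texts):
    """Rank unscreened title/abstract records by predicted include probability.

    screened_texts:   title+abstract strings already screened by reviewers
    screened_labels:  1 = include, 0 = exclude, for the screened records
    unscreened_texts: remaining records to be ordered for screening
    """
    vectorizer = TfidfVectorizer(stop_words="english", max_features=20000)
    X_train = vectorizer.fit_transform(screened_texts)
    X_rest = vectorizer.transform(unscreened_texts)

    model = LogisticRegression(max_iter=1000, class_weight="balanced")
    model.fit(X_train, screened_labels)

    scores = model.predict_proba(X_rest)[:, 1]
    # Highest predicted relevance first: reviewers screen these records next.
    return sorted(zip(scores, unscreened_texts), key=lambda p: p[0], reverse=True)
```

In practice such tools retrain as reviewers screen more records, so the ranking is updated iteratively rather than computed once.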

Cited by 60 publications (67 citation statements); references 15 publications.

Citation statements (ordered by relevance):
“…Although some of the larger studies had high rates of title/abstract includes, due to the size of the dataset, the reduction in screening burden would still result in large time savings and potentially lead to subsequent cost savings. A recently published study evaluated the accuracy of screening prioritization of Abstrackr and EPPI-Reviewer [15]. Screening burden to identify all title/abstract includes for the de novo review was 85% or more for seven of the nine reviews for both Abstrackr (median: 93.8%, range: 71.1 to 99.0%) and EPPI-Reviewer (median: 91.3%, range: 39.9 to 97.9%).…”
Section: Discussion
confidence: 99%
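For readers unfamiliar with the screening-burden figures quoted above, the metric as used here is the fraction of prioritized records that must be screened before the last true include is found. The helper below is a hedged sketch of that calculation, assuming include/exclude labels are available in the tool's ranked order; it is illustrative code, not taken from either tool or the cited study.

```python
def screening_burden(ranked_labels):
    """Fraction of records screened, in ranked order, before the last include appears.

    ranked_labels: include/exclude labels (1/0) in the order the tool prioritized
    them. Returns a value in (0, 1]; lower means more screening work saved.
    """
    last_include = max(i for i, label in enumerate(ranked_labels) if label == 1)
    return (last_include + 1) / len(ranked_labels)

# Toy example: 10 records, with the final include ranked 7th.
print(screening_burden([1, 1, 0, 0, 0, 0, 1, 0, 0, 0]))  # 0.7 -> 70% screening burden
```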
“…We also reviewed the systematic review of research investigating text mining for study identification in systematic reviews published by O'Mara-Eves et al. in 2015 [8]. We identified nine studies that were conducted or published since 2015 reporting on the use of ML for screening [10, 15, 16, 18, 22–26]. As none of the studies shared our objectives, and trustworthiness remains a serious barrier to the uptake of semi-automated screening by review teams [13], we saw value in undertaking the present study.…”
Section: Rationale
confidence: 99%
“…There is also little research documenting under which conditions ML-assisted screening approaches may be most successfully applied. To what extent ML-assisted methods could compromise the validity of systematic reviews' findings is vitally important, but few studies have reported on this outcome [17,18]. In this study, we aimed to address these knowledge gaps.…”
Section: Introduction
confidence: 99%