2022
DOI: 10.1101/2022.02.24.22268947
Preprint

A real-world evaluation of the implementation of NLP technology in abstract screening of a systematic review

Abstract: The laborious and time-consuming nature of systematic reviews hinders the dissemination of up-to-date evidence synthesis. Well-performing natural language processing (NLP) tools for systematic reviews have been developed, showing promise for improving efficiency. However, the feasibility and value of these tools have not been comprehensively demonstrated in a real-world review. We developed an NLP-assisted abstract screening tool that provides text inclusion recommendations, keyword highlights, and visual context cues.…

Cited by 3 publications (6 citation statements)
References 25 publications (27 reference statements)
“…An alternative inquiry utilizing another natural language processing tool, Covidence, reported sensitivity and PPVs of 0.90 and 0.92, respectively. 18 Here, we demonstrated that the sensitivity of the automated citation screening in the primary analysis varied from 0.2 to 0.75, which is lower than that found in the previous study that investigated the accuracy of ASReview. 4 The difference can be explained by the setting of the standard reference: whereas the previous report set the final list of included studies as the standard reference, we used the results of the conventional method after the first screening as the standard reference in the primary analysis.…”
Section: Discussion (contrasting)
confidence: 61%
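For context on the metrics quoted in the statement above, sensitivity and positive predictive value (PPV) in abstract screening are computed from the screening confusion matrix. The sketch below is a minimal illustration using hypothetical counts, not data from either study cited.

```python
# Minimal sketch of the screening metrics quoted above (sensitivity, PPV).
# The counts in the example are hypothetical placeholders, not study data.

def screening_metrics(true_pos: int, false_pos: int, false_neg: int) -> dict:
    """Sensitivity = TP / (TP + FN); PPV = TP / (TP + FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    ppv = true_pos / (true_pos + false_pos)
    return {"sensitivity": sensitivity, "ppv": ppv}

# Example: 90 relevant abstracts flagged, 8 irrelevant flagged, 10 relevant missed.
print(screening_metrics(true_pos=90, false_pos=8, false_neg=10))
# -> sensitivity 0.90, PPV ≈ 0.92
```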
“…[15][16][17] Other publications employing similar software have indicated the feasibility of replacing the conventional method with an efficient automated approach for screening studies in a systematic review. 18,19 Among several software packages that use natural language processing tools, the reliance on prior training data input varies. 6,9 For instance, DistillerAI requires a training dataset comprising 40 excluded and 10 included references for active machine learning.…”
Section: Discussion (mentioning)
confidence: 99%
“…We calibrated our reviewers according to screening proficiency by having prospective reviewers first screen a ‘calibration set’ of abstracts. This set was sourced from a prior study by the SeroTracker group, 54 which assessed the performance of dual human reviewer workflows. Notably, the SeroTracker researchers were experienced SR screeners, having contributed to the SeroTracker living SR for over a year, and represent a high-performing baseline for screening accuracy.…”
Section: Methods (mentioning)
confidence: 99%
“…For full-texts, we counted the total number of ‘Included’ and ‘Excluded’ articles from Covidence. Based on previous studies, the time required to screen a single abstract ranges from 20-461 seconds, 5,54,55 and 4.3-20 minutes for a single full-text article. 55,56 We aligned with Perlman-Arrow et al 54 due to our use of the same ST dataset and set the screening time at 30 seconds per abstract and 10 minutes per full-text.…”
Section: Methods (mentioning)
confidence: 99%
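A minimal sketch of the screening-time estimate described in the statement above, assuming the quoted per-item times (30 seconds per abstract, 10 minutes per full-text article); the article counts in the example are hypothetical placeholders, not figures from the cited study.

```python
# Minimal sketch of the screening-time estimate described above.
# Per-item times follow the quoted assumptions (30 s per abstract,
# 10 min per full text); the article counts are hypothetical placeholders.

ABSTRACT_SECONDS = 30          # assumed time to screen one abstract
FULL_TEXT_SECONDS = 10 * 60    # assumed time to screen one full-text article

def estimated_screening_hours(n_abstracts: int, n_full_texts: int) -> float:
    """Return the estimated total screening time in hours."""
    total_seconds = n_abstracts * ABSTRACT_SECONDS + n_full_texts * FULL_TEXT_SECONDS
    return total_seconds / 3600

# Example with placeholder counts: 2,000 abstracts and 150 full texts.
print(f"{estimated_screening_hours(2000, 150):.1f} hours")  # 41.7 hours
```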