2019
DOI: 10.21037/atm.2018.08.11

Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models

Abstract: Background: Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model. Methods: Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively), and MIMIC-III (n=32,259 and 54,685, respectively) were converted into sentences. Insertions,…
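As context for the method, here is a minimal sketch of how word-level insertion, deletion, and substitution errors might be injected into report sentences to create synthetic training data for such a seq2seq detector. This is not the authors' code; the corruption rate `p`, the toy vocabulary, and the function name `corrupt_sentence` are illustrative assumptions.

```python
import random

def corrupt_sentence(tokens, vocab, p=0.1, rng=random):
    """Randomly delete, substitute, or insert words in a token list.

    Illustrative only: with probability p/3 each, a token is dropped
    (deletion) or replaced by a random vocabulary word (substitution),
    and a spurious word is inserted after it (insertion).
    """
    out = []
    for tok in tokens:
        r = rng.random()
        if r < p / 3:
            continue                       # deletion: drop the token
        elif r < 2 * p / 3:
            out.append(rng.choice(vocab))  # substitution: random word
        else:
            out.append(tok)                # keep the original token
        if rng.random() < p / 3:
            out.append(rng.choice(vocab))  # insertion: spurious word
    return out

vocab = ["acute", "no", "hemorrhage", "midline", "shift", "is", "there"]
clean = "there is no acute intracranial hemorrhage".split()
print(" ".join(corrupt_sentence(clean, vocab, p=0.3)))
```

A seq2seq model can then be trained on (corrupted, clean) sentence pairs so that, at inference time, divergence between the model's output and the input sentence flags a likely error.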

Cited by 10 publications (3 citation statements)
References 23 publications
“…This category is for publications that have a primary technical aim not focused on radiology report outcomes, e.g. detecting negation in reports, spelling correction [106], fact checking [107,108], methods for sample selection, and crowd-sourced annotation [109]. This category did not occur in Pons' earlier review.…”
Section: Technical NLP
confidence: 99%
“…These cases are often called nonwords [83] and also have typical characteristics [33]. The high number of misspellings in radiology reports has also been established internationally [168].…”
Section: Automatic Correction of the Text
confidence: 99%
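For reference, the "nonwords" mentioned in the statement above are tokens that appear in no lexicon, so they can be flagged by simple dictionary lookup. A minimal sketch, assuming a plain set of known words (the toy lexicon below is a placeholder, not a real medical dictionary):

```python
def find_nonwords(sentence, lexicon):
    """Return tokens that are absent from the lexicon (candidate nonwords)."""
    return [t.strip(".,;:") for t in sentence.lower().split()
            if t.strip(".,;:") not in lexicon]

# Toy lexicon; a real system would use a large medical word list.
lexicon = {"there", "is", "no", "acute", "intracranial", "hemorrhage"}
print(find_nonwords("There is no acute intracranial hemmorhage.", lexicon))
# -> ['hemmorhage']
```

Note that this only catches nonword errors; real-word insertions, deletions, and substitutions, which the paper above targets, pass a dictionary check and require contextual models such as seq2seq.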