2019
DOI: 10.1016/j.artmed.2018.11.004

Comparative effectiveness of convolutional neural network (CNN) and recurrent neural network (RNN) architectures for radiology text report classification

Abstract: This paper explores cutting-edge deep learning methods for information extraction from medical imaging free-text reports at a multi-institutional scale and compares them to a state-of-the-art domain-specific rule-based system (PEFinder) and traditional machine learning methods (SVM and AdaBoost). We proposed two distinct deep learning models: (i) CNN Word-GloVe, and (ii) a domain-phrase attention-based hierarchical recurrent neural network (DPA-HNN), for synthesizing information on pulmonary emboli (PE) from ov…

Cited by 195 publications (114 citation statements). References 44 publications.
“…DL has shown remarkable results in extracting low- and high-level abstractions from raw text data with semantic and syntactic capabilities. This ability is often accompanied by excellent performance across translational science applications (25,32) and as highlighted below.…”
Section: Word Embedding
confidence: 99%
“…DL is quickly emerging in the literature as a viable alternative method to traditional ML for the classification of clinical narratives, even in situations where limited labeled data is available [37]. The technique can help in the recognition of a limited number of categories from biomedical text [39,40]; identify psychiatric conditions of patients based on short clinical histories [41]; and accurately classify whether or not radiology reports indicate pulmonary embolism [42,43] whilst outperforming baseline methods (e.g. RFs or DTs).…”
Section: Background and Significance
confidence: 99%
“…With respect to automated text classification, in this work, we compared the approaches from the two main paradigms: (1) symbolic text classification, in which texts are represented with sparse vectors of TF-IDF weights, used as input features for traditional machine learning algorithms, such as Logistic Regression (LR) or Support Vector Machine (SVM); and (2) a more recent semantic text classification paradigm, in which dense semantic representations of words-word embeddings-are introduced as input to a neural architecture. Different deep learning architectures have been tried in a number of medical text classification tasks [25][26][27], including automated classification of radiology reports [6,28,29]. While recurrent [29,30] and attention-based neural networks [27,31] may present a viable solution, convolutional neural networks (CNN) seem to generally offer an edge in classification performance as well as faster training times [6,29].…”
Section: Introduction
confidence: 99%
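The symbolic paradigm described in the statement above — sparse TF-IDF vectors fed to a traditional linear classifier — can be sketched in a few lines. This is a minimal illustration only, assuming scikit-learn is available; the toy report snippets and PE labels below are invented for the demo and are not from the paper's dataset.

```python
# Minimal sketch of the "symbolic" text-classification paradigm:
# TF-IDF features + a traditional linear model (Logistic Regression).
# The reports and labels are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "filling defect in the right main pulmonary artery, acute pulmonary embolism",
    "no evidence of pulmonary embolism, clear lungs",
    "segmental filling defects consistent with acute embolism",
    "negative study, no filling defect identified",
]
labels = [1, 0, 1, 0]  # 1 = PE positive, 0 = PE negative (invented)

# Unigrams + bigrams give the sparse feature space; the pipeline
# chains vectorization and classification into one estimator.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)

pred = clf.predict(["acute pulmonary embolism with filling defect"])[0]
```

The semantic paradigm would instead map each word to a dense embedding and feed the sequence to a neural architecture, trading the interpretable sparse features for learned representations.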
“…Different deep learning architectures have been tried in a number of medical text classification tasks [25][26][27], including automated classification of radiology reports [6,28,29]. While recurrent [29,30] and attention-based neural networks [27,31] may present a viable solution, convolutional neural networks (CNN) seem to generally offer an edge in classification performance as well as faster training times [6,29]. Furthermore, due to their efficiency and being less data-hungry than, e.g., recurrent networks, CNNs have profiled themselves as a go-to text classification architecture in general-purpose natural language processing [32][33][34].…”
Section: Introduction
confidence: 99%
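The core CNN-for-text operation that these statements credit with strong performance — convolving filters over n-gram windows of word embeddings, then max-over-time pooling into a fixed-size vector — can be shown in plain NumPy. This is an illustrative sketch only, not the architecture from the paper: the vocabulary, dimensions, and random weights are all made up.

```python
# Illustrative NumPy sketch of a Kim-style CNN feature extractor for text:
# embed tokens, slide a filter bank over n-gram windows, ReLU, then
# max-over-time pooling to a fixed-size vector regardless of length.
import numpy as np

rng = np.random.default_rng(0)
vocab = {"no": 0, "acute": 1, "pulmonary": 2, "embolism": 3, "seen": 4}
emb_dim, n_filters, window = 8, 4, 2
embeddings = rng.normal(size=(len(vocab), emb_dim))        # toy embeddings
filters = rng.normal(size=(n_filters, window * emb_dim))   # untrained filters

def cnn_features(tokens):
    """Max-over-time pooled CNN features for a token sequence."""
    x = embeddings[[vocab[t] for t in tokens]]              # (seq_len, emb_dim)
    windows = np.stack([x[i:i + window].ravel()             # sliding n-grams
                        for i in range(len(tokens) - window + 1)])
    conv = np.maximum(windows @ filters.T, 0.0)             # ReLU feature maps
    return conv.max(axis=0)                                 # (n_filters,)

feats = cnn_features(["acute", "pulmonary", "embolism"])
```

Because pooling collapses the time axis, sequences of any length map to the same fixed-size feature vector — the property that makes this architecture efficient for variable-length clinical narratives, as the statement above notes.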