2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298712

Interleaved text/image Deep Mining on a large-scale radiology database

Abstract: Despite tremendous progress in computer vision, effective learning on very large-scale (> 100K patients) medical image databases has been vastly hindered. We present an interleaved text/image deep learning system to extract and mine the semantic interactions of radiology images and reports from a national research hospital's picture archiving and communication system. Instead of using full 3D medical volumes, we focus on a collection of representative ~216K 2D key images/slices (selected by clinicians for diagn…

Cited by 91 publications (100 citation statements) · References 27 publications
“…They showed that semantic information increases classification accuracy for a variety of pathologies in Optical Coherence Tomography (OCT) images. Shin et al (2015) and Wang et al (2016e) mined semantic interactions between radiology reports and images from a large data set extracted from a PACS system. They employed latent Dirichlet allocation (LDA), a type of stochastic model that generates a distribution over a vocabulary of topics based on words in a document.…”
Section: Combining Image Data With Reports
confidence: 99%
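The LDA workflow the statement above describes (each report mapped to a distribution over latent topics) can be sketched with scikit-learn. The tiny report corpus and the topic count here are illustrative assumptions, not the cited PACS-derived dataset:

```python
# Hypothetical sketch: mining topics from radiology report text with LDA.
# The corpus below is invented for illustration; the cited work used a
# far larger report collection extracted from a hospital PACS.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reports = [
    "mild cardiomegaly with clear lungs",
    "no acute cardiopulmonary abnormality",
    "right lower lobe opacity concerning for pneumonia",
    "enlarged cardiac silhouette with pulmonary edema",
]

# Bag-of-words counts are the standard input representation for LDA.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reports)

# Fit a 2-topic model: each document becomes a distribution over topics,
# and each topic is a distribution over the vocabulary.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape: (n_docs, n_topics)
```

Each row of `doc_topics` sums to 1, so the dominant topic per report can serve as a weak label for the associated key image, which is the general interleaved text/image idea the citation describes.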
“…[26] One example of an approach based on deep learning and convolutional networks has already been published by Shin et al. for X-ray images [27]. Currently, medical retrieval systems are trying to become much more accessible on the web, typically being multi-modal by supporting both textual and visual queries. Examples of web-based search engines are NovaMedSearch [28] and GoldMiner.…”
Section: Related Work
confidence: 99%
“…[7] fine-tuned all layers of a pre-trained CNN for automatic classification of interstitial lung diseases. In [21], Shin et al. used fine-tuned pre-trained CNNs to automatically map medical images to document-level topics, document-level sub-topics, and sentence-level topics.…”
Section: Related Work
confidence: 99%
“…Several researchers have demonstrated the utility of fine-tuning CNNs for biomedical image analysis, but they only performed one-time fine-tuning, that is, simply fine-tuning a pre-trained CNN once with the available training samples, involving no active selection process (e.g., [4, 19, 5, 2, 21, 7, 18, 24]). To our knowledge, our proposed method is among the first to integrate active learning into fine-tuning CNNs in a continuous fashion to make CNNs more amenable to biomedical image analysis, with the aim of cutting annotation cost dramatically.…”
Section: Introduction
confidence: 99%