2016
DOI: 10.1007/978-3-319-41546-8_7

A Comparison Between a Deep Convolutional Neural Network and Radiologists for Classifying Regions of Interest in Mammography

Cited by 32 publications (24 citation statements)
References 8 publications
“…Solitary cyst and malignant mass patches from a diagnostic dataset are subsequently presented to the network and features are extracted from the hidden layers and used for the diagnosis task. The network employed is similar to the one in Ref …”
Section: Methods (mentioning)
confidence: 99%
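As an illustration of the transfer setup this statement describes (reusing hidden-layer activations of a trained network as features for a separate diagnosis task), here is a minimal sketch in PyTorch with scikit-learn, not the cited authors' code; `trained_cnn`, `diagnostic_patches`, and `labels` are hypothetical placeholders.

```python
# Sketch: extract hidden-layer activations from a CNN trained on one task and
# reuse them as features for a cyst-vs-malignant-mass classifier.
# `trained_cnn`, `diagnostic_patches`, and `labels` are hypothetical placeholders.
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

def extract_features(trained_cnn: nn.Module, patches: torch.Tensor) -> torch.Tensor:
    """Run patches through all layers except the final classifier head."""
    feature_extractor = nn.Sequential(*list(trained_cnn.children())[:-1])
    feature_extractor.eval()
    with torch.no_grad():
        feats = feature_extractor(patches)   # hidden-layer activations
    return feats.flatten(start_dim=1)        # one feature vector per patch

# Hypothetical diagnostic dataset: labels 0 = solitary cyst, 1 = malignant mass.
# features = extract_features(trained_cnn, diagnostic_patches)
# clf = LogisticRegression(max_iter=1000).fit(features.numpy(), labels)
```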
“…The CNN was implemented in Theano. We used a VGG-like network architecture that was similar to the one employed in Kooi et al., with two additional convolutional layers.…”
Section: Methods (mentioning)
confidence: 99%
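The exact configuration is not reproduced in this statement; the following is a rough PyTorch sketch of what a "VGG-like" patch classifier of this kind can look like. The channel widths, layer counts, and single-channel input are illustrative assumptions, not the cited architecture (which was implemented in Theano).

```python
# Sketch of a VGG-style patch classifier: stacks of 3x3 convolutions with ReLU,
# each followed by 2x2 max pooling. Sizes are assumptions, not the cited setup.
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, n_convs: int) -> nn.Sequential:
    """Stack of 3x3 conv + ReLU layers followed by 2x2 max pooling (VGG pattern)."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

vgg_like = nn.Sequential(
    conv_block(1, 32, 2),     # mammography patches are single-channel
    conv_block(32, 64, 2),
    conv_block(64, 128, 3),   # extra convolutions would deepen a block like this
    nn.Flatten(),
    nn.LazyLinear(512), nn.ReLU(inplace=True), nn.Dropout(0.5),
    nn.Linear(512, 2),        # e.g., benign vs. malignant output
)
```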
“…[18][19][20][21][22] Advances in GPUs, availability of large labeled data sets, and corresponding novel optimization methods have led to the development of CNNs with deeper architecture. The DCNNs have recently shown success in various medical image analysis tasks such as segmentation, detection, and classification in mammography, 23 urinary bladder, 24 thoracic-abdominal lymph nodes and interstitial lung disease, 11,25 and pulmonary perifissural nodules. 26 Because of the local connectivity, shared weights, and local pooling properties of DCNNs, feature learning, i.e., feature extraction and selection, is inherently embedded into the training process.…”
Section: Introduction (mentioning)
confidence: 99%
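The parameter savings that local connectivity and shared weights provide can be made concrete with a small back-of-the-envelope comparison; the patch size and channel count below are assumed for illustration, not taken from the cited papers.

```python
# Why shared weights matter: a 3x3 convolution reuses the same kernel at every
# position of a 256x256 single-channel patch, while a dense layer does not.
conv_params = 3 * 3 * 1 * 32 + 32                # 3x3 kernels, 32 output channels, plus biases
fc_params = (256 * 256) * 32 + 32                # dense layer mapping the whole patch to 32 units
print(f"convolutional layer: {conv_params:,} parameters")   # 320
print(f"fully connected layer: {fc_params:,} parameters")   # 2,097,184
```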
“…This will minimize the variance between the training and validation sets and any future testing sets. Data augmentation has been used in many studies along with DL and CNN, such as [192][193][194][195][196].…”
mentioning
confidence: 99%
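A minimal augmentation pipeline along these lines could look as follows; this is a sketch using torchvision, and both the transform set and the `PatchDataset` class are assumptions rather than details drawn from the cited studies.

```python
# Sketch: label-preserving augmentation for lesion patches during training.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # lesion patches have no canonical left/right
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),    # small rotations keep the label unchanged
    transforms.ToTensor(),
])

# train_set = PatchDataset(train_patches, train_labels, transform=augment)  # hypothetical dataset class
```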