2020 International SoC Design Conference (ISOCC) 2020
DOI: 10.1109/isocc50952.2020.9333038
Multi-Channel Input Deep Convolutional Neural Network for Mammogram Diagnosis

Cited by 2 publications (3 citation statements) · References 5 publications
“…From the results presented in Table 2, we can see that the proposed model delivers superior performance across multiple evaluation metrics. The recognition accuracy of our model is 92.05%, which is much higher than the results reported by Ertosun et al 24 and Suzuki et al 25. The sensitivity of our model is 91.25%, which is 8.97% higher than Swiderski et al, 26 8.43% higher than Kurek et al, 27 and 7.85% higher than Bae et al 28. Moreover, from the definition of sensitivity in Section 2.2.5, a higher sensitivity indicates that more mammogram ROIs containing a mass are recognized correctly, which is more consistent with the goal of clinical treatment. All of this shows that the proposed model offers more reliable performance for breast mass recognition in mammograms.…”
Section: Recognition Results (contrasting)
confidence: 62%
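The comparison above hinges on sensitivity, i.e. the fraction of mass-containing ROIs that are correctly recognized. A minimal sketch of how sensitivity and specificity are computed from binary labels and predictions is shown below; the function name and toy data are illustrative, not from the paper, whose reported 91.25% sensitivity comes from its own test set.

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute (sensitivity, specificity) from binary labels and predictions.

    Sensitivity = TP / (TP + FN): share of positive (mass) ROIs found.
    Specificity = TN / (TN + FP): share of negative (normal) ROIs kept.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Toy example: 4 mass ROIs (label 1) and 4 normal ROIs (label 0).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
# sens = 3/4 = 0.75, spec = 3/4 = 0.75
```

This makes concrete why the citation statement equates higher sensitivity with more mass-containing ROIs being detected, the clinically relevant error direction.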
“…Zhao et al 12 used four-view mammograms from DDSM to train an improved ResNet‐50 with a cross‐view attention module; their AUC for the binary classification of distinguishing malignant from non‐malignant mammograms on DDSM was 0.862. Bae et al 28 used the GoogLeNet Inception module for the network and proposed feeding it four‐channel mammograms (left_cc, left_mlo, right_cc, and right_mlo) as input, achieving an AUC of 0.95 for the two‐class task (normal vs. cancer), with 82.40% sensitivity and 89.40% specificity, respectively. These studies focused on using popular pretrained CNN architectures; however, breast mass recognition remains a challenging task.…”
Section: Discussion (mentioning)
confidence: 99%
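The four-channel input described for Bae et al amounts to stacking the four standard mammographic views (left_cc, left_mlo, right_cc, right_mlo) along the channel axis before the first convolutional layer. A minimal sketch of that assembly step follows; the image size and zero-valued placeholder arrays are assumptions for illustration, not the paper's preprocessing.

```python
import numpy as np

# Hypothetical: each view is one grayscale image, resized to a common H x W.
H, W = 224, 224
left_cc   = np.zeros((H, W), dtype=np.float32)
left_mlo  = np.zeros((H, W), dtype=np.float32)
right_cc  = np.zeros((H, W), dtype=np.float32)
right_mlo = np.zeros((H, W), dtype=np.float32)

# Stack the four views along a new leading channel axis -> shape (4, H, W).
# The network's first conv layer then takes 4 input channels instead of 1.
x = np.stack([left_cc, left_mlo, right_cc, right_mlo], axis=0)
assert x.shape == (4, H, W)
```

Treating the views as channels lets a single forward pass see both breasts in both projections, which is one way to exploit cross-view correlations without separate per-view networks.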