2020
DOI: 10.1097/md.0000000000021243
Fully automatic classification of breast MRI background parenchymal enhancement using a transfer learning approach

Abstract: Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and has been suggested to be associated with a higher risk of developing breast cancer. The background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories “minimal,” “mild,” “moderate,” and “marked.” The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automat…
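The transfer-learning setup the abstract describes, reusing a pretrained feature extractor and training only a new classification head for the four BPE categories, can be sketched as follows. This is a minimal illustration: the fixed random projection standing in for the pretrained backbone, the toy data, and all hyperparameters are assumptions, not the study's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone" stand-in: a fixed random projection from a
# flattened image slice (64 values) to a 32-dimensional feature vector.
# In the study this role is played by a dCNN pretrained on other data.
W_backbone = rng.normal(size=(64, 32))  # never updated (transfer learning)

def extract_features(x):
    """Frozen feature extractor; only the head below is trained."""
    return np.maximum(x @ W_backbone, 0.0)  # ReLU features

# New trainable head mapping features to the four BI-RADS BPE classes:
# minimal, mild, moderate, marked.
W_head = rng.normal(scale=0.01, size=(32, 4))
W_head_init = W_head.copy()

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy labelled data standing in for annotated MRI slices.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 4, size=200)

# Train only the head: softmax cross-entropy, plain gradient descent.
lr = 0.1
for _ in range(200):
    F = extract_features(X)
    P = softmax(F @ W_head)
    G = P.copy()
    G[np.arange(len(y)), y] -= 1.0     # dL/dlogits for cross-entropy
    W_head -= lr * (F.T @ G) / len(y)  # backbone stays frozen

pred = softmax(extract_features(X) @ W_head).argmax(axis=1)
```

Freezing the backbone is what makes this "transfer": only the small head is fit to the labelled MRI data, which is why the approach works with modest annotated datasets.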

Cited by 20 publications (20 citation statements) | References 29 publications
“…In prior work, BPE quantification methods included an automated FGT segmentation step and a calculation of percent enhancement, 13,14 but these methods show only slight to fair agreement with radiologists (κ = 0.20–0.36) and are more discordant at higher levels of BPE, precisely when they might be most clinically useful 2 . More recently, supervised deep learning models have been used to approximate radiology report BPE designations, with test set accuracies ranging from 0.70 to 0.93 17,18,20,21 . In contrast to prior work, we show that an AI model not only approximates the radiology report BPE classification but can surpass it.…”
Section: Discussion (contrasting)
confidence: 65%
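The percent-enhancement approach this statement refers to, segmenting the FGT and measuring how much of it enhances after contrast, can be sketched in a few lines. Both the relative-enhancement definition and all cut-offs below are illustrative assumptions, not the formulas of the cited refs 13,14.

```python
import numpy as np

def percent_enhancement(pre, post, fgt_mask, threshold=0.2):
    """Fraction of segmented FGT voxels whose signal rises by more than
    `threshold` relative to pre-contrast. The 20% cut-off is an
    illustrative assumption."""
    pre_v = pre[fgt_mask].astype(float)
    post_v = post[fgt_mask].astype(float)
    rel = (post_v - pre_v) / np.maximum(pre_v, 1e-6)
    return float(np.mean(rel > threshold))

def bpe_category(pe):
    """Map the enhancement fraction onto the four BI-RADS terms.
    The cut-points are hypothetical, chosen only for illustration."""
    for cut, label in [(0.25, "minimal"), (0.50, "mild"), (0.75, "moderate")]:
        if pe < cut:
            return label
    return "marked"

# Toy example: a 2x2 "slice", all voxels inside the FGT mask; two voxels
# enhance by 50%, two by 5%, so half exceed the 20% threshold.
pre = np.full((2, 2), 100.0)
post = np.array([[150.0, 150.0], [105.0, 105.0]])
mask = np.ones((2, 2), dtype=bool)
pe = percent_enhancement(pre, post, mask)  # 0.5
```

The weak agreement with radiologists reported above (κ = 0.20–0.36) is unsurprising under such a scheme: the continuous fraction must be thresholded into four categories, and the choice of cut-points directly drives concordance.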
“…Nam et al developed a similar model using qualitative FGT and BPE scoring by radiologists as the ground truth. Most recently, Borkowski et al developed a fully automated BPE classification model using sequential “breast slice detection” and “BPE classification” neural networks, achieving noninferior performance to experienced radiologists 20 . However, no BPE tool to date has been shown to surpass radiologist BPE designations, which remain the standard of care in clinical practice today.…”
mentioning
confidence: 99%
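The two-stage design mentioned above, a slice-detection network feeding a BPE-classification network, amounts to a simple filter-then-classify pipeline. The sketch below uses stub callables in place of the two trained networks; the function names and the majority-vote aggregation are assumptions for illustration, not the cited paper's method.

```python
# Four BI-RADS BPE categories used by both the report and the classifier.
BPE_CLASSES = ("minimal", "mild", "moderate", "marked")

def classify_exam(slices, is_breast_slice, classify_bpe):
    """Stage 1: keep only slices the detection model flags as containing
    breast tissue. Stage 2: classify each kept slice, then aggregate to one
    exam-level label by majority vote (aggregation rule is an assumption)."""
    kept = [s for s in slices if is_breast_slice(s)]
    if not kept:
        return None  # no breast slices detected in this exam
    votes = [classify_bpe(s) for s in kept]
    return max(set(votes), key=votes.count)

# Usage with trivial stub "networks" driven by precomputed labels:
exam = [
    {"breast": True,  "bpe": "mild"},
    {"breast": False, "bpe": "marked"},  # discarded by stage 1
    {"breast": True,  "bpe": "mild"},
    {"breast": True,  "bpe": "moderate"},
]
label = classify_exam(exam, lambda s: s["breast"], lambda s: s["bpe"])
```

Separating detection from classification keeps the classifier from being trained, or evaluated, on slices with no breast tissue, which is the usual rationale for such sequential designs.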
“…Quantification of BPE volume and intensity on MRI may be feasible in the future, but we await publication of robust data on that topic before endorsing percentage recommendations” [1]. Several different quantitative approaches have been investigated since the publication of the fifth edition of the BI-RADS Atlas, with numerous studies in the past five years [21–34].…”
Section: Quantitative Assessment (mentioning)
confidence: 99%
“…Various semi-automated and automated segmentation models have been created [27–30]. BPE assessment tools using machine learning have also been developed [31–34]. Overall, many of these approaches have correlated well with qualitative BPE assessment.…”
Section: Quantitative Assessment (mentioning)
confidence: 99%
“…(modality and anatomy truncated): Feature extractor hybrid [56]; Fine-tuning scratch [17,57]; Many [23,25,58–63]
Microscopy / Pathology: Feature extractor [64–70]; Fine-tuning [21]; Fine-tuning scratch [71,72]
MRI / Bones: Many [73]
MRI / Genital systems: Feature extractor [14,74]
MRI / Integumentary system: Fine-tuning scratch [75]; Many [76]
MRI / Nervous system: Fine-tuning scratch [16,77,78]; Many [19,79–81]
OCT / Integumentary system: Feature extractor [82]
OCT / Cardiovascular system: Many [83]
OCT / Sense organs: Feature extractor [84–87]; Feature extractor hybrid [88]; Fine-tuning [20]; Fine-tuning scratch [29,89]; Many [90–92]
Photography / Integumentary system: Feature extractor…”
Section: Appendix B Supplementary Data (mentioning)
confidence: 99%