2021
DOI: 10.48550/arxiv.2106.07049
Preprint
Weakly-supervised High-resolution Segmentation of Mammography Images for Breast Cancer Diagnosis

Kangning Liu,
Yiqiu Shen,
Nan Wu
et al.

Abstract: In the last few years, deep learning classifiers have shown promising results in image-based medical diagnosis. However, interpreting the outputs of these models remains a challenge. In cancer diagnosis, interpretability can be achieved by localizing the region of the input image responsible for the output, i.e. the location of a lesion. Alternatively, segmentation or detection models can be trained with pixel-wise annotations indicating the locations of malignant lesions. Unfortunately, acquiring such labels …

Cited by 2 publications (3 citation statements) | References 29 publications
“…Importantly, these models are not fine-tuned to any of the datasets. Testing results based on area under the receiver operating characteristic curve (AUC) show that the GMIC model [24] is about 20% better than Faster R-CNN [18] on the NYU reader study test set [31], but Faster R-CNN outperforms GMIC on the INBreast dataset [14] by 4%, and GLAM [12] reaches 85.3% on the NYU test set but only gets 61.2% on INBreast and 78.5% on CMMD [15]. Such experiments show the importance of assessing the ability of models to generalise to testing sets from different populations and with images produced by different machines, compared with the training set.…”
Section: Introduction
confidence: 99%
“…The idea is similar to the methods proposed in this thesis; however, because it is trained as an end-to-end classifier, GMIC may not be suitable for classification in the CPB (Subsection 6.2.7). Liu et al. [112] proposed a weakly supervised architecture that follows the same line as GMIC, whose pipeline runs in stages but still combines the two architectures in a final stage, called global-local activation maps (GLAM), also in the context of breast cancer detection. First, they trained the global architecture, then froze it and trained the local architecture on the patches produced by the global architecture, and, in the last stage, fine-tuned both architectures jointly.…”
Section: Trabalhos, Informações Relevantes, Ano
“…2020 Shen et al. [166] Evolution of GMIC to improve results and inference times. 2021 Liu et al. [112] Evolution of GMIC called GLAM. 2021 Luo et al. [122] Activation maps to reduce the influence of the background when training on images of crop pests.…”
Section: Trabalhos, Informações Relevantes, Ano