2022
DOI: 10.48550/arxiv.2210.08979
Preprint
An Interactive Interpretability System for Breast Cancer Screening with Deep Learning

Abstract: Figure 1: Overview of our interface: (1) potentially malignant patches identified by a patch-based model on the full mammogram; (2) the context of a selected patch in the original mammogram; (3) a panel where users query the model's representation with salient regions; (4) a visualization of neurons based on their learned representations; (5) a panel where users annotate the semantic meaning of neurons; (6) explainability reports generated from the neuron annotations.
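The patch-based screening step in panel (1) can be sketched as a sliding-window scorer over the full mammogram. The sketch below is a minimal illustration, not the authors' implementation: the scoring function and threshold are hypothetical stand-ins for the paper's deep patch classifier.

```python
import numpy as np

def extract_patches(image, patch=64, stride=64):
    """Tile a 2D mammogram into patches, keeping each patch's top-left coords."""
    h, w = image.shape
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(((y, x), image[y:y + patch, x:x + patch]))
    return out

def flag_suspicious(image, score_fn, threshold=0.5, patch=64, stride=64):
    """Return coordinates of patches whose score exceeds the threshold.

    score_fn is a placeholder for the paper's patch-based deep model."""
    return [(y, x) for (y, x), p in extract_patches(image, patch, stride)
            if score_fn(p) >= threshold]

# Toy usage: mean intensity as a placeholder "malignancy" score.
img = np.zeros((128, 128))
img[64:, 64:] = 1.0  # bright lower-right quadrant
flags = flag_suspicious(img, score_fn=lambda p: p.mean(), threshold=0.5)
```

With a real model, `score_fn` would run the patch through the trained network; the flagged coordinates then drive the highlighted regions in panel (1).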

Cited by 1 publication (1 citation statement)
References 18 publications
“…In the proposed work, we also adopt a similar neuron-level semantic modeling approach. However, instead of relying on human annotation of objects in the image as in [10,12], we leverage the recent CLIP multimodal embeddings [13,14]. Discovering semantically aligned neurons from a StyleGAN will naturally enable us to produce an arbitrarily large "pseudo"-labeled dataset.…”
Section: Related Work
confidence: 99%
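The citing work's idea of matching neurons to concepts via CLIP embeddings, rather than human annotation, reduces at its core to cosine similarity in a shared image-text embedding space. The sketch below illustrates that matching step only; the random vectors stand in for real CLIP outputs, and the concept names are hypothetical examples.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def label_neurons(neuron_embs, concept_embs, concept_names):
    """Assign each neuron the concept whose text-side embedding is closest
    (by cosine similarity) to the neuron's image-side embedding."""
    labels = []
    for v in neuron_embs:
        sims = [cosine(v, c) for c in concept_embs]
        labels.append(concept_names[int(np.argmax(sims))])
    return labels

# Placeholder embeddings: with real CLIP, concept_embs would come from the
# text encoder and neuron_embs from images that maximally activate a neuron.
rng = np.random.default_rng(0)
names = ["calcification", "mass", "dense tissue"]  # hypothetical concepts
concepts = rng.normal(size=(3, 512))
neurons = concepts + 0.1 * rng.normal(size=(3, 512))  # neurons near concepts
labels = label_neurons(neurons, concepts, names)
```

Once neurons carry concept labels this way, the pseudo-labeled dataset mentioned in the statement follows by generating images and recording which labeled neurons fire.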