2020
DOI: 10.1007/978-3-030-59722-1_50
Multiple Instance Learning with Center Embeddings for Histopathology Classification

Cited by 86 publications (67 citation statements)
References 21 publications
“…(2019): 0.803, 0.796, 0.767, 0.85, 0.742; DeepAttentionMIL (Ilse et al., 2018): 0.859, 0.875, 0.861, 0.75, 1; JointMIL (Chikontwe et al., 2020): 0.901, 0.909, 0.896, 0.85, 0.968; Zhang3DCNN (Zhang et al.)…”
Section: Results
confidence: 99%
“…(2019): 0.845, 0.852, 0.836, 0.8, 0.903; DeepAttentionMIL (Ilse et al., 2018): 0.845, 0.859, 0.845, 0.75, 0.968; JointMIL (Chikontwe et al., 2020): 0.845, 0.837, 0.814, 0.9, 0.774; DA-CMIL (w/o …): 0.718, 0.728, 0.714, 0.65, 0.806; DA-CMIL (w/o …): 0.873, 0.88, 0.866, 0.825, 0.935; DA-CMIL (w/ …): 0.958, 0.955, 0.951, 0.975, 0.935…”
Section: Results
confidence: 99%
“…Experimental results show that SA-Transformer-Network outperforms state-of-the-art methods by a significant margin. For example, SA-Transformer-Network is 8% more accurate than the method proposed by Chikontwe et al [7] and 6% more accurate than the method proposed by Hashimoto et al [8]. Importantly, SA-Transformer-Network delivers comparable performance to 187 practicing pathologists who interpreted the same test set cases in an independent study.…”
Section: Introduction
confidence: 83%
“…ChikonMIL. The method of Chikontwe et al. (ChikonMIL) (R3 in Table 2) [7] first selects the top-k patches, and then uses these patches for instance- and bag-representation learning. This method also uses a center loss that reduces intra-class variability, and a soft assignment to a learned diagnostic centroid for the final diagnosis.…”
Section: SA-Transformer-Network's Performance Is Compared With Five Recent Whole Slide Image Classification Methods
confidence: 99%
confidence: 99%
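The ChikonMIL pipeline described in the excerpt above (top-k patch selection, a center loss that pulls patch embeddings toward their class center, and soft assignment to learned diagnostic centroids) can be sketched in broad strokes. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the squared-Euclidean distance, and the softmax-over-negative-distances assignment are assumptions made for clarity.

```python
import numpy as np

def top_k_patches(scores, k):
    """Return indices of the k highest-scoring patches in a slide (bag)."""
    return np.argsort(scores)[::-1][:k]

def center_loss(embeddings, labels, centers):
    """Mean squared distance between each embedding and its class center.

    Minimizing this pulls same-class embeddings together, reducing
    intra-class variability (the role the excerpt attributes to the
    center loss).
    """
    diffs = embeddings - centers[labels]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

def soft_assign(embedding, centers):
    """Soft assignment of a bag embedding to class centroids.

    Softmax over negative squared distances: closer centroids get
    higher probability; the argmax gives the predicted diagnosis.
    """
    d = np.sum((centers - embedding) ** 2, axis=1)
    logits = -d - np.max(-d)          # shift for numerical stability
    e = np.exp(logits)
    return e / e.sum()

# Toy usage: 4 patch scores, pick top-2; 2 class centers in 2-D.
scores = np.array([0.1, 0.9, 0.5, 0.7])
chosen = top_k_patches(scores, 2)     # indices of the two strongest patches
centers = np.array([[0.0, 0.0], [10.0, 10.0]])
probs = soft_assign(np.array([1.0, 1.0]), centers)  # near class 0
```

In this sketch the top-k step stands in for instance selection before instance- and bag-representation learning; in the actual method both losses would be trained jointly with the feature extractor rather than computed on fixed embeddings.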