2022
DOI: 10.1007/978-3-031-18814-5_3
Cross-Scale Attention Guided Multi-instance Learning for Crohn’s Disease Diagnosis with Pathological Images

Cited by 14 publications (15 citation statements)
References 29 publications
“…Although this comparison may seem unfair (comparing SAM's zero-shot performance with fully-trained medical image models), SAM's zero-shot performance has been shown to outperform fully-trained models on natural image datasets. This suggests that SAM has worse zero-shot transfer capability on medical images, which has also been observed in many other studies [9,16,31].…”
Section: Results (supporting)
confidence: 80%
“…Deng et al [16] proposed a novel cross-scale MIL algorithm (CS-MIL) that explicitly aggregates inter-scale relationships into a single MIL network for pathology image diagnosis. In addition, the proposed method utilizes cross-scale attention scores to generate importance maps, which enhances the interpretability and comprehensibility of the CS-MIL model.…”
Section: Wsi Exploration Of Idh Gliomas Using a Multiple Instance Lea… (mentioning)
confidence: 99%
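The quoted description of CS-MIL is enough to sketch the mechanism: each patch location contributes a feature vector at several magnifications, a scale-level attention fuses those into one vector per patch, and an instance-level attention then pools the bag for slide-level classification. Below is a minimal, illustrative PyTorch sketch under those assumptions; it is not the authors' released CS-MIL code, and the module names, layer sizes, and the gated-attention pooling choice are all placeholders.

```python
import torch
import torch.nn as nn

class CrossScaleAttentionMIL(nn.Module):
    """Illustrative cross-scale attention-guided MIL aggregator.

    Assumes each instance (patch location) already has a feature vector
    at every magnification (e.g. 20x/10x/5x) from a frozen backbone.
    """

    def __init__(self, feat_dim=512, attn_dim=128, n_classes=2):
        super().__init__()
        # Scores one attention weight per scale for each instance.
        self.scale_attn = nn.Sequential(
            nn.Linear(feat_dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )
        # Gated attention over instances (Ilse et al., 2018 style).
        self.inst_V = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Tanh())
        self.inst_U = nn.Sequential(nn.Linear(feat_dim, attn_dim), nn.Sigmoid())
        self.inst_w = nn.Linear(attn_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        # x: (n_instances, n_scales, feat_dim)
        scale_w = torch.softmax(self.scale_attn(x), dim=1)      # (N, S, 1)
        fused = (scale_w * x).sum(dim=1)                        # (N, D)
        a = self.inst_w(self.inst_V(fused) * self.inst_U(fused))
        inst_w = torch.softmax(a, dim=0)                        # (N, 1)
        bag = (inst_w * fused).sum(dim=0)                       # (D,)
        logits = self.classifier(bag)
        # scale_w, mapped back to patch coordinates, is what yields
        # the cross-scale importance maps mentioned in the quote.
        return logits, scale_w, inst_w

# Usage: one bag of 200 patches, 3 scales, 512-d features.
model = CrossScaleAttentionMIL()
logits, scale_w, inst_w = model(torch.randn(200, 3, 512))
```

Returning the two attention tensors alongside the logits is the design point: interpretability falls out of the same weights used for aggregation, with no separate saliency pass.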
“…
Deng et al. (2023) [16] | TCGA | TCGA (n = 613) | AUC (0.7737) | CS-MIL | No
Loeffler et al. (2022) [17] | TCGA | TCGA (n = 680) | AUC (0.764) | DenseNet | No
Wang et al. (2023) [18] | TCGA | TCGA (n = 940) | AUC (86.4) | HMT-MIL (customised architecture) | No
Faust et al. (2022) [19] | University of Toronto | University of Toronto (n = 47) | Accuracy (99.3%) | VGG19 | No
Fang et al. (2023) [20] | TCGA, Xiangya Hospital | TCGA (n = 844), Xiangya (n = 116) | AUC (0.827 ± 0.0465) | Multi-Beholder (customised architecture) | …”
Section: No (mentioning)
confidence: 99%
“…To achieve this, SAM [53] was created. SAM leverages prompt engineering to tackle general downstream segmentation tasks, using promptable segmentation as its pre-training objective [97]. To make the model flexible in adapting to prompts and robust against interference, SAM is divided into three components: the image encoder, the prompt encoder, and the mask decoder.…”
Section: Foundation Models (mentioning)
confidence: 99%
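As a concrete illustration of that three-component split, the sketch below drives the public `segment_anything` package: the heavy image encoder embeds the image once via `set_image`, and the lightweight prompt encoder and mask decoder then run per prompt inside `predict`. The checkpoint filename, the blank stand-in image, and the click coordinates are placeholders, not values from the cited work.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Image encoder: a ViT backbone that embeds the image once.
# "sam_vit_b_01ec64.pth" is a placeholder checkpoint path.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB image
predictor.set_image(image)  # runs the image encoder (once per image)

# Prompt encoder + mask decoder: cheap per-prompt passes. The prompt
# here is a single foreground click at (x=256, y=256).
masks, scores, low_res_logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),   # 1 = foreground point, 0 = background
    multimask_output=True,        # return candidate masks for ambiguous prompts
)
```

Because the expensive encoding is decoupled from prompting, many prompts can be tried against one embedded image, which is what makes the promptable pre-training objective practical at scale.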