2020
DOI: 10.48550/arxiv.2005.12991
Preprint

Kernel Self-Attention in Deep Multiple Instance Learning

Abstract: Multiple Instance Learning (MIL) is a weakly supervised learning paradigm in which only one label is provided for an entire bag of instances. As such, it appears in many problems of medical image analysis, such as the classification of whole-slide biopsy images. Most recently, MIL has also been applied to deep architectures by introducing an aggregation operator that focuses on the crucial instances of a bag. In this paper, we enrich this idea with the self-attention mechanism to take into account dependencies a…
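The mechanism the abstract describes — instance embeddings made context-aware by self-attention, then pooled by an attention-based aggregation operator — can be illustrated with a minimal PyTorch sketch. This uses plain scaled dot-product self-attention and the attention pooling of Ilse et al. (2018), not the kernel variant the paper proposes; all dimensions and module names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionMILPool(nn.Module):
    """Self-attention over a bag's instance embeddings, followed by
    attention-based MIL pooling. A sketch, not the paper's kernel variant."""

    def __init__(self, dim=512, attn_dim=128):
        super().__init__()
        # Scaled dot-product self-attention projections.
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Scoring network for attention-based pooling (Ilse et al., 2018).
        self.score = nn.Sequential(
            nn.Linear(dim, attn_dim), nn.Tanh(), nn.Linear(attn_dim, 1)
        )

    def forward(self, h):                 # h: (n_instances, dim), one bag
        # Context-aware embeddings: every instance attends to all others.
        a = (self.q(h) @ self.k(h).T) / h.shape[-1] ** 0.5   # (n, n)
        h = F.softmax(a, dim=-1) @ self.v(h)                 # (n, dim)
        # Weighted average pooling into a single bag representation.
        w = F.softmax(self.score(h), dim=0)                  # (n, 1)
        return (w * h).sum(dim=0)                            # (dim,)

bag = torch.randn(30, 512)                # e.g., 30 patch embeddings
z = SelfAttentionMILPool()(bag)           # (512,) bag-level representation
```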

Cited by 4 publications (5 citation statements)
References 20 publications
“…Image classification network. To aggregate patches' representations of an image, we apply attention-based multiple instance learning, AbMIL [10,17], a type of weighted average pooling, where the neural network determines the weights of representations. More formally, if…”
Section: Methods
confidence: 99%
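The quote breaks off at "More formally, if…". For reference, AbMIL pooling as defined by Ilse et al. (2018) is usually written as below; this is the textbook formulation, offered as context rather than the citing paper's exact notation.

```latex
% Attention-based MIL pooling (Ilse et al., 2018): the bag embedding z
% is a weighted average of the K instance embeddings h_k.
\[
  z = \sum_{k=1}^{K} a_k \mathbf{h}_k,
  \qquad
  a_k = \frac{\exp\!\left(\mathbf{w}^{\top}\tanh(\mathbf{V}\mathbf{h}_k)\right)}
             {\sum_{j=1}^{K}\exp\!\left(\mathbf{w}^{\top}\tanh(\mathbf{V}\mathbf{h}_j)\right)}
\]
% w and V are the learnable parameters of the scoring network.
```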
“…3, S1). These are further cross-correlated in the self-attention module [26], to account for dependencies across the image and to create context-aware embeddings from every patch. The subsequent attention-based aggregation module [27] assigns a score of relative importance to every patch and combines them into a single representation.…”
Section: PCAI Design Rationale
confidence: 99%
“…High recall is especially important in the case of cancer detection, since failing to detect the presence of cancer can lead to severe consequences. We ran the experiment 5 times, each with a 10-fold cross-validation, and summarized the results compared to the reports in [8] and [14] in Tab. 6 and Tab.…”
Section: Histopathological Dataset
confidence: 99%
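The evaluation protocol quoted above is 5 repetitions of 10-fold cross-validation with recall as the headline metric. A minimal sketch of that protocol using scikit-learn follows; the classifier and synthetic data are hypothetical stand-ins, since the cited work evaluates MIL models on histopathology bags.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Hypothetical stand-in data and model (the cited work uses bag-level
# MIL classifiers on histopathology features).
X, y = make_classification(n_samples=200, weights=[0.7], random_state=0)
clf = RandomForestClassifier(random_state=0)

# 5 repeats of 10-fold cross-validation, scored by recall, because a
# missed cancer (false negative) is the costliest error here.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="recall")
print(f"recall: {scores.mean():.3f} +/- {scores.std():.3f}")
```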
“…The attention score reflects how likely an instance is to be the key instance that triggers the bag classifier. Later on, [14,18,12] proposed to use the self-attention mechanism [17] to further consider the dependencies between instance embeddings. However, computing the self-attention matrix across all instance embeddings in a bag is computationally complex and might yield redundant information that does not contribute a useful supervisory signal.…”
Section: Introduction
confidence: 99%
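The complexity concern in this last quote is that the self-attention matrix over n instances costs O(n²) time and memory. One standard way around this, shown here purely to make the argument concrete, is a kernel feature-map factorization in the style of Katharopoulos et al. (2020); this is not necessarily the kernel used in the cited paper, and queries/keys/values are collapsed to the raw embeddings for brevity.

```python
import torch
import torch.nn.functional as F

def quadratic_self_attention(h):
    """Standard self-attention: materializes an (n, n) matrix,
    so cost grows quadratically with the bag size n."""
    a = F.softmax(h @ h.T / h.shape[-1] ** 0.5, dim=-1)  # (n, n)
    return a @ h                                         # (n, d)

def linearized_self_attention(h):
    """Kernel feature-map attention, phi(Q) (phi(K)^T V): a (d, d)
    summary replaces the (n, n) matrix, giving O(n d^2) cost."""
    phi = lambda x: F.elu(x) + 1                 # positive feature map
    q, k = phi(h), phi(h)                        # (n, d)
    kv = k.T @ h                                 # (d, d) summary
    norm = q @ k.sum(dim=0, keepdim=True).T      # (n, 1) normalizer
    return (q @ kv) / norm                       # (n, d)

h = torch.randn(1000, 64)          # a bag of 1000 instances
out = linearized_self_attention(h) # never forms a 1000 x 1000 matrix
```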