2023
DOI: 10.1016/j.ccell.2023.08.002

Transformer-based biomarker prediction from colorectal cancer histology: A large-scale multicentric study

Sophia J. Wagner, Daniel Reisenbüchler, Nicholas P. West, et al.
Cited by 67 publications (28 citation statements)
References 57 publications
“…TransMIL for whole-slide image classification was investigated in our recent study 53 and in a study by Wagner et al. 54. For a given cytological image, we first split it into multiple 224 × 224 image patches.…”
Section: Methods (mentioning)
Confidence: 99%
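To make the patch-splitting step in this quote concrete, here is a minimal Python sketch of tiling an image into non-overlapping 224 × 224 patches. The function name, the NumPy representation, and the choice to discard partial border patches are assumptions; the cited study does not specify how borders are handled.

```python
import numpy as np

def split_into_patches(image: np.ndarray, patch_size: int = 224) -> np.ndarray:
    """Tile an H x W x C image into non-overlapping patch_size x patch_size patches.

    Assumption: border regions that do not fill a complete patch are discarded.
    """
    h, w = image.shape[:2]
    patches = [
        image[y:y + patch_size, x:x + patch_size]
        for y in range(0, h - patch_size + 1, patch_size)
        for x in range(0, w - patch_size + 1, patch_size)
    ]
    return np.stack(patches)  # shape: (n_patches, patch_size, patch_size, C)

# Example: a mock 1000 x 1200 RGB image yields a bag of 4 x 5 = 20 patches.
bag = split_into_patches(np.zeros((1000, 1200, 3), dtype=np.uint8))
print(bag.shape)  # (20, 224, 224, 3)
```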
“…The H&E-based one-arm model took as input the bag of 180 µm patch-level features of size 3,456 extracted from EsVIT 45, where the number of patches per bag varies. We subsequently adapted state-of-the-art WSI classification architectures 13,20 for our distant recurrence prediction task. Given a batch size of one, the time scale was discretized into four intervals based on the quartiles of the distribution of uncensored patients, and the negative log-likelihood loss was used 46.…”
Section: Methods (mentioning)
Confidence: 99%
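The quoted setup corresponds to a standard discrete-time survival formulation: cut the time axis at the quartiles of uncensored patients' event times, then train with a negative log-likelihood that treats observed and censored cases differently. Below is a hedged PyTorch sketch of that formulation (per-interval hazards via a sigmoid, in the spirit of Zadeh and Schmid's discrete model); all names are illustrative and may differ from the cited implementation.

```python
import numpy as np
import torch

def make_interval_cuts(event_times: np.ndarray) -> np.ndarray:
    """Cut points at the quartiles of uncensored patients' event times,
    yielding the four intervals described in the quote."""
    return np.quantile(event_times, [0.25, 0.5, 0.75])

def discrete_survival_nll(logits, interval_idx, is_event, eps=1e-7):
    """Negative log-likelihood for discrete-time survival over 4 intervals.

    logits:       (B, 4) per-interval hazard logits
    interval_idx: (B,) long tensor, interval containing the event/censoring time
    is_event:     (B,) float tensor, 1.0 if event observed, 0.0 if censored
    """
    hazards = torch.sigmoid(logits)                    # h_k = P(event in k | survived to k)
    survival = torch.cumprod(1.0 - hazards, dim=1)     # S_k = prod_{j<=k} (1 - h_j)
    s_prev = torch.cat([torch.ones_like(survival[:, :1]), survival[:, :-1]], dim=1)
    idx = interval_idx.unsqueeze(1)
    s_before = torch.gather(s_prev, 1, idx).squeeze(1)  # survival up to the interval
    h_at = torch.gather(hazards, 1, idx).squeeze(1)
    s_at = torch.gather(survival, 1, idx).squeeze(1)
    # Observed event contributes S_{k-1} * h_k; a censored case contributes S_k.
    ll = is_event * (s_before.clamp(min=eps).log() + h_at.clamp(min=eps).log()) \
       + (1 - is_event) * s_at.clamp(min=eps).log()
    return -ll.mean()

# Usage with mock data (batch size of one, as in the quote):
cuts = make_interval_cuts(np.array([14.0, 30.0, 55.0, 80.0, 120.0]))
loss = discrete_survival_nll(torch.randn(1, 4), torch.tensor([2]), torch.tensor([1.0]))
```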
“…Given a batch size of one, the time scale was discretized into four intervals based on the quartiles of the distribution of uncensored patients, and the negative log-likelihood loss was used 46. Our ablation study on the H&E-based one-arm model showed that the attention-based multiple instance learning model (AttnMIL) 19 outperformed spatial context-aware architectures, including a graph attention network 13 and transformers 20, while featuring far lower computational complexity (Supplementary Table 3).…”
Section: Methods (mentioning)
Confidence: 99%
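For reference, a minimal PyTorch sketch of the attention-based MIL pooling (AttnMIL) that the quoted ablation favored, following the standard non-gated attention formulation of Ilse et al. (2018): a small network scores each patch feature, a softmax turns the scores into weights, and the weighted sum gives a slide-level representation. The dimensions (3,456-dim features, four output logits) echo the quote; everything else is an assumption and may differ from the cited implementation.

```python
import torch
import torch.nn as nn

class AttnMILPooling(nn.Module):
    """Attention-based MIL pooling: a learned softmax weighting over patch
    features, followed by a weighted sum and a linear classifier head."""

    def __init__(self, in_dim: int = 3456, attn_dim: int = 256, n_classes: int = 4):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(in_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(in_dim, n_classes)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (n_patches, in_dim) -- one slide per step (batch size of one).
        weights = torch.softmax(self.attention(bag), dim=0)  # (n_patches, 1)
        slide_feat = (weights * bag).sum(dim=0)              # (in_dim,)
        return self.classifier(slide_feat)                   # e.g. 4 hazard logits

# Usage: a bag with a variable number of patches, as in the quote.
model = AttnMILPooling()
logits = model(torch.randn(937, 3456))
print(logits.shape)  # torch.Size([4])
```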
“…Self-supervised learning (SSL), on the other hand, has gained increasing attention for its capacity to automatically capture image features from unlabeled data [9]. Applications of SSL models have demonstrated superior performance in various cancer classification and survival prediction tasks compared to traditional supervised learning models [14, 15, 16, 17]. Barlow Twins, an SSL model designed to learn non-redundant image features, has several advantages over other SSL models (e.g.…”
Section: Introduction (mentioning)
Confidence: 99%
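Barlow Twins learns "non-redundant" features by pushing the cross-correlation matrix between embeddings of two augmented views toward the identity: diagonal terms enforce invariance to augmentation, off-diagonal terms decorrelate feature dimensions. A compact sketch of that published loss (Zbontar et al., 2021) follows; the trade-off weight `lambd` and the tensor shapes are illustrative.

```python
import torch

def barlow_twins_loss(z_a: torch.Tensor, z_b: torch.Tensor, lambd: float = 5e-3):
    """Barlow Twins loss.

    z_a, z_b: (N, D) embeddings of two augmented views of the same batch.
    Pushes the cross-correlation matrix toward the identity: diagonal terms
    enforce invariance, off-diagonal terms enforce non-redundant features.
    """
    n, d = z_a.shape
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)   # standardize per dimension
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n                              # (D, D) cross-correlation
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag

# Usage with mock embeddings of two augmented views:
loss = barlow_twins_loss(torch.randn(128, 256), torch.randn(128, 256))
```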