2022
DOI: 10.1007/978-3-031-17979-2_3

Multi-scale Deformable Transformer for the Classification of Gastric Glands: The IMGL Dataset

Abstract: Gastric cancer is one of the most common cancers and a leading cause of cancer-related death worldwide. Among the risk factors for gastric cancer, gastric intestinal metaplasia (IM) has been found to increase the risk of gastric cancer and is considered a precancerous lesion. Early detection of IM therefore allows risk stratification regarding possible progression to cancer. To this end, accurate classification of gastric glands in histological images plays an important ro…

Cited by 3 publications (3 citation statements)
References 24 publications
“…Figure 9 shows some examples of SOTA transformer architectures developed for histopathological image classification: DT-DSMIL [56], gastric cancer classification (IMGL-VTNet [57]), kidney subtype classification (i-ViT [59], tRNAsformer [58]), thymoma or thymic carcinoma classification (MC-ViT [76]), lung cancer classification (GTP [46], FDTrans [60]), skin cancer classification (Wang et al [45]), and thyroid cancer classification (Wang et al [77], PyT2T-ViT [41], Wang et al [78]), using different transformer-based architectures. Furthermore, other transformer models such as TransMIL [65], KAT [61], a ViT-based unsupervised contrastive learning architecture [79], DecT [66], StoHisNet [80], CWC-transformer [63], LA-MIL [44], SETMIL [81], Prompt-MIL [67], GLAMIL [67], MaskHIT [82], HAG-MIL [68], MEGT [47], MSPT [70], and HistPathGPT [69] have also been evaluated on more than one tissue type (e.g., liver, prostate, breast, brain, gastric, kidney, lung, and colorectal) for histopathological image classification using different transformer approaches.…”
Section: Histopathological Image Classification
confidence: 99%
“…By leveraging the strengths of both deep learning and multidimensional texture analysis, the authors aimed to achieve improved results in early fire detection. More recently, vision transformers [18][19][20][21][22], inspired by the deep learning model developed for natural language processing [23], have been employed for various applications, including fire detection and classification. More specifically, attention layers have been utilized in different ways by vision transformers.…”
Section: Introduction
confidence: 99%
“…More specifically, attention layers have been utilized in different ways by vision transformers. For example, Barmpoutis et al [20] investigated the use of a spatial and multiscale feature enhancement module; Xu et al [21] designed a fused axial attention module capturing local and global spatial interactions; and Tu et al [22] introduced a multiaxis attention model which utilizes global-local spatial interactions. Focusing on fire detection, Ghali et al [24] used two vision-based transformers, namely, TransUNet and MedT, extracting both global and local features in order to reduce fire-pixel misclassifications.…”
Section: Introduction
confidence: 99%
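The attention variants cited above (spatial multiscale enhancement, fused axial attention, multiaxis attention) are all built on the same core operation: scaled dot-product self-attention over a sequence of patch tokens. The sketch below is a minimal single-head illustration of that core operation only; the token count, embedding size, and random weights are toy values, not parameters of any cited model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention.
    x: (n_tokens, d) patch embeddings; wq/wk/wv: (d, d) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)     # (n_tokens, n_tokens) pairwise interactions
    weights = softmax(scores, axis=-1)  # each row is a distribution over tokens
    return weights @ v                  #每 token: weighted mix of value vectors

# Toy example: 4 "patch tokens" with embedding dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, wq, wk, wv)     # shape (4, 8)
```

Axial and multiaxis variants restrict which token pairs enter the `scores` matrix (e.g., same row/column only), trading the full quadratic interaction for cheaper global-local mixing.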