2022
DOI: 10.1016/j.cmpb.2022.106924
StoHisNet: A hybrid multi-classification model with CNN and Transformer for gastric pathology images

Cited by 42 publications (10 citation statements) · References 24 publications
“…The past two years have seen a surge in popularity of transformer modeling for common computational pathology tasks such as WSI segmentation [40,43,44] and histology image classification [41,[45][46][47]. Transformers have also been used for pathologist-level question-answering from histological imaging [39], predicting pathologists' visual attention [48], and for pathology text mining [49].…”
Section: Introduction (confidence: 99%)
“…Transformers have also been used for pathologist-level question-answering from histological imaging [39], predicting pathologists' visual attention [48], and for pathology text mining [49]. Nearly all applications of transformer-based approaches to whole slide imaging implement vision transformers (ViT), including recent works that combine CNNs and transformers [45]. In contrast, our multi-stage pipeline extracts expert-defined features from segmented structures before deriving WSI context using transformer encoders.…”
Section: Introduction (confidence: 99%)
“…Before this paper, research on HI classification based on deep learning has utilized various network structures, such as modified convolutional neural networks (CNNs) (Hameed et al 2020, Wu et al 2020, Aatresh et al 2021, Elmannai et al 2021, Chattopadhyay et al 2022), Transformer networks (Tummala et al 2022), and hybrid architectures (Zou et al 2021, Fu et al 2022). However, we observed that all these methods to some extent overlook certain information present in HIs.…”
Section: Introduction (confidence: 99%)
“…CNNs tend to pay less attention to global information, while Transformer networks struggle to capture local information. Combining the two types of neural networks may resolve this problem, but previous works such as (Zou et al 2021, Fu et al 2022) only combined the high-level features of the two networks while neglecting some low-level information communication. Additionally, previous works have primarily focused on evolving network structures while disregarding the characteristics of HIs.…”
Section: Introduction (confidence: 99%)
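The trade-off described in the quote above (CNNs see only a local neighbourhood, Transformers attend to every position) can be illustrated with a minimal NumPy sketch. This is not the StoHisNet architecture or any cited model; all function names are hypothetical, and it fuses one local convolution-style feature with one global self-attention-style feature per position of a toy 1-D signal.

```python
import numpy as np

def local_features(x, kernel):
    """Depthwise 1-D convolution: each output depends only on a small
    neighbourhood of x, mimicking a CNN's local receptive field."""
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, (pad, pad), mode="edge")
    return np.array([np.dot(xp[i:i + k], kernel) for i in range(len(x))])

def global_features(x):
    """Single-head self-attention over the whole sequence: every output
    position attends to every input position (global context)."""
    scores = np.outer(x, x)                          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    return weights @ x

def fuse(x, kernel):
    """Concatenate local and global features per position, the basic move
    hybrid CNN/Transformer models make (here at a single level only)."""
    return np.stack([local_features(x, kernel), global_features(x)], axis=1)

x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
feats = fuse(x, np.array([0.25, 0.5, 0.25]))
print(feats.shape)  # (5, 2): one local and one global feature per position
```

The quote's criticism maps onto this sketch directly: fusing only once, at the top, is the "high-level features only" pattern; the cited remedy is to exchange information between the two branches at low levels as well.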
“…), Sub-database C (80 × 80 pixels). GasHisSDB provides the ability to distinguish between classical machine learning classifier performance and deep learning classifier performance (13). Details are given in Section 2.1.…”
Section: Introduction (confidence: 99%)