2023
DOI: 10.1016/j.compbiomed.2023.106812

FDTrans: Frequency Domain Transformer Model for predicting subtypes of lung cancer using multimodal data

Cited by 5 publications (3 citation statements)
References 28 publications
“…AMIGO [ 3 ] created a multi-modal graph transformer architecture that predicts patient survival based on multi-modal histopathological images and shared related data. Cai et al [ 60 ] created a frequency-domain transformer architecture that integrates frequency and spatial domains for histopathological lung cancer image analysis and subtype determination.…”
Section: Discussion
confidence: 99%
“…For breast cancer histopathological image classification, DCET-Net [ 72 ] proposed a dual-stream convolution-expanded transformer architecture; Breast-Net [ 51 ] explores the ability of ensemble learning techniques using four Swin transformer architectures; HATNet [ 52 ] uses end-to-end vision transformers with a self-attention mechanism; ScoreNet [ 16 ] developed an efficient transformer-based architecture that integrates a coarse-grained global attention framework with a fine-grained local attention mechanism framework; LGVIT [ 73 ] built a local–global ViT model by introducing a new local–global MHSA mechanism and a ghost feed-forward network block into the network; dMIL-transformer [ 53 ] developed a two-stage double max–min multiple-instance learning (MIL) transformer architecture that combines both the spatial and morphological information of the cancer regions. Other than breast cancer classification, transformers have also been applied to other histopathological image cancer classification tasks, such as bone cancer classification (NRCA-FCFL [ 74 ]), brain cancer classification (ViT-WSI [ 17 ], ASI-DBNet [ 54 ], Ding et al [ 55 ]), colorectal cancer classification (MIST [ 75 ], DT-DSMIL [ 56 ]), gastric cancer classification (IMGL-VTNet [ 57 ]), kidney subtype classification (i-ViT [ 59 ], tRNAsformer [ 58 ]), thymoma or thymic carcinoma classification (MC-ViT [ 76 ]), lung cancer classification (GTP [ 46 ], FDTrans [ 60 ]), skin cancer classification (Wang et al [ 45 ]), and thyroid cancer classification (Wang et al [ 77 ], PyT2T-ViT [ 41 ], Wang et al [ 78 ]) using different transformer-based architectures.
Furthermore, other transformer models such as Transmil [ 65 ], KAT [ 61 ], ViT-based unsupervised contrastive learning architecture [ 79 ], DecT [ 66 ], StoHisNet [ 80 ], CWC-transformer [ 63 ], LA-MIL [ 44 ], SETMIL [ 81 ], Prompt-MIL [ 67 ], GLAMIL [ 67 ], MaskHIT [ 82 ], HAG-MIL [ 68 ], MEGT [ 47 ], MSPT [ 70 ], and HistPathGPT [ 69 ] have also been evaluated on more than one tissue type, such as liver, prostate, breast, brain, gastric, kidney, lung, colorectal, and so on, for h...…”
Section: Current Progress
confidence: 99%
“…Despite the progress made by existing methods, challenges remain due to limited annotated datasets, large intra-class differences, and high inter-class similarities. To address these challenges, Cai et al [70] proposed a dual-branch deep learning model called the Frequency Domain Transformer Model (FDTrans). FDTrans combines image domain and genetic information to determine lung cancer subtypes in patients.…”
Section: Deep Learning Techniques for Lung Cancer Using TCGA Dataset
confidence: 99%
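The core idea the statements above attribute to FDTrans — representing histopathology patches in the frequency domain before a transformer consumes them — can be sketched minimally. This is not the authors' implementation: the 2D FFT, the block-mean pooling, and the `frequency_tokens` helper below are hypothetical stand-ins chosen only to illustrate how spatial patches become frequency-domain tokens.

```python
import numpy as np

def frequency_tokens(patch: np.ndarray, n_tokens: int = 16) -> np.ndarray:
    """Illustrative sketch: turn one image patch into frequency-domain tokens.

    Assumes n_tokens is a perfect square; each token is the mean FFT
    magnitude over one block of the centred spectrum (a crude frequency
    "patch embedding", not the FDTrans tokenisation).
    """
    spectrum = np.abs(np.fft.fft2(patch))   # frequency-domain magnitudes
    spectrum = np.fft.fftshift(spectrum)    # move low frequencies to centre
    h, w = spectrum.shape
    side = int(np.sqrt(n_tokens))           # tokens form a side x side grid
    th, tw = h // side, w // side
    # Partition the spectrum into side x side blocks and pool each block.
    blocks = spectrum[: side * th, : side * tw].reshape(side, th, side, tw)
    return blocks.mean(axis=(1, 3)).reshape(-1)

# Example: a 32x32 grayscale patch becomes 16 frequency tokens that a
# transformer branch could attend over alongside spatial tokens.
patch = np.random.default_rng(0).random((32, 32))
tokens = frequency_tokens(patch, n_tokens=16)
print(tokens.shape)  # (16,)
```

A dual-branch model in the spirit of the quoted description would run such frequency tokens through one transformer branch and raw spatial patch embeddings through another before fusing them.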