2023
DOI: 10.1109/access.2023.3244228

LCDEiT: A Linear Complexity Data-Efficient Image Transformer for MRI Brain Tumor Classification

Abstract: Current deep learning-assisted brain tumor classification models suffer from inductive bias and parameter dependency problems when extracting texture-based image information. To address these problems, recently developed vision transformer models have substituted deep learning models for classification tasks. However, the high performance of the vision transformer model depends on a large-scale dataset as well as on self-attention calculations among the image patches, which result in a quadratic co…
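For context on the truncated sentence above: the quadratic cost refers to standard self-attention, which compares every image patch with every other patch. A brief sketch of the standard formulation (background, not quoted from the paper; here $n$ is the number of patches and $d$ the embedding dimension):

$$\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V,\qquad Q,K,V\in\mathbb{R}^{n\times d},$$

where forming $QK^{\top}$ takes $O(n^{2}d)$ time and $O(n^{2})$ memory, so the cost grows quadratically with the number of image patches.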

Cited by 23 publications (11 citation statements). References 43 publications.
“…Originally, they were networks used for natural language processing (NLP). Their effectiveness in these tasks led to the development of transformers such as the Detection Transformer (DETR) for tasks related to vision analysis [193], the Swin-Transformer [61], the Vision Transformer (ViT) [194], and the Data-Efficient Image Transformer (DeiT) [194]. The DETR is dedicated to object detection, which also includes manual analytical processes, and it uses a CNN to learn 2D representations of the input data (images).…”
Section: Transformers
Mentioning (confidence: 99%)
“…The methods described above are often applied in the diagnosis of brain tumors and AD, and they have generally improved diagnostic efficiency through the remarkable performance of Transformers. In these models, the Transformer encoder often implements modular integration [77,82,84], which not only retains the complete function to stabilize performance but also obtains different parameter matrices according to the specific task, improving expressiveness on downstream tasks. Some dual-branch designs retain both CNNs and Transformers to perform local and global feature extraction [79,80], making up for the limitations of single-type feature extraction.…”
Section: Brain Disease Diagnosis
Mentioning (confidence: 99%)
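The dual-branch pattern described in this statement pairs a convolutional branch (local texture) with a transformer branch (global context) and fuses the two. A minimal PyTorch sketch of that generic idea (module names and sizes are illustrative assumptions, not taken from the cited papers):

```python
import torch
import torch.nn as nn

class DualBranchClassifier(nn.Module):
    """Hypothetical dual-branch design: a CNN extracts local features,
    a Transformer encoder extracts global context, and the two are fused."""

    def __init__(self, num_classes: int = 4, dim: int = 64):
        super().__init__()
        # Local branch: a small CNN over the input image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (B, dim, 1, 1)
        )
        # Global branch: patch embedding followed by a Transformer encoder.
        self.patch_embed = nn.Conv2d(1, dim, kernel_size=16, stride=16)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        # Fusion head over the concatenated local + global features.
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local = self.cnn(x).flatten(1)                           # (B, dim)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, n, dim)
        global_ = self.encoder(tokens).mean(dim=1)               # (B, dim)
        return self.head(torch.cat([local, global_], dim=1))

# Usage: a batch of single-channel 224x224 MRI slices.
logits = DualBranchClassifier()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```

The fusion by concatenation is one simple choice; the cited designs may combine branches differently.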
“…They crossed vectors to calculate attention weights in different attention heads, flexibly learning the more important pathological features. Considering that Transformers rely on large-scale datasets for self-attention calculation, Ferdous et al [82] proposed the Linear-Complexity Data-Efficient image Transformer (LCDEiT), which used a teacher-student strategy and an external attention mechanism to achieve low-complexity computation, leading to rapid brain tumor classification. Sarasua et al [83] introduced a spatio-temporal network for 3D anatomical meshes (TransforMesh) that exploited Transformers to incorporate heterogeneous trajectories, enabling the prediction of neuroanatomical changes.…”
Section: Brain Disease Diagnosis
Mentioning (confidence: 99%)
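External attention, the linear-complexity mechanism this statement attributes to LCDEiT, replaces the pairwise query-key product with small learnable memory units shared across the whole input, so the cost scales linearly with the number of patches. A minimal sketch of the generic external-attention idea (a simplified illustration, not LCDEiT's actual implementation; dimensions and names are assumptions):

```python
import torch
import torch.nn as nn

class ExternalAttention(nn.Module):
    """Generic external attention (simplified): tokens attend to a small
    learnable external memory instead of to each other, so the cost is
    O(n * S * d) -- linear in the number of tokens n."""

    def __init__(self, dim: int = 64, memory_size: int = 32):
        super().__init__()
        self.to_query = nn.Linear(dim, dim)
        self.mem_key = nn.Linear(dim, memory_size, bias=False)    # external key memory M_k
        self.mem_value = nn.Linear(memory_size, dim, bias=False)  # external value memory M_v

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, n, dim), where n is the number of image patches.
        attn = self.mem_key(self.to_query(x))                   # (B, n, S): scores vs. memory
        attn = attn.softmax(dim=1)                              # normalize over tokens
        attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-9)   # double normalization over memory
        return self.mem_value(attn)                             # (B, n, dim)

# Usage: 196 patch tokens of width 64.
out = ExternalAttention()(torch.randn(2, 196, 64))
print(out.shape)  # torch.Size([2, 196, 64])
```

Because the memory size S is a fixed hyperparameter, doubling the number of patches only doubles the attention cost, in contrast to the quadratic growth of standard self-attention.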