2022
DOI: 10.1016/j.patter.2022.100498

Multi-domain integrative Swin transformer network for sparse-view tomographic reconstruction

Cited by 45 publications (19 citation statements)
References 45 publications
“…developed a multi-domain integrative Swin transformer network (MIST-net) for sparse-view reconstruction. 56 Furthermore, the Swin transformer was used for MRI reconstruction. 57 How to stabilize transformer-based deep reconstruction networks is also important.…”
Section: Discussion
confidence: 99%
“…Therefore, the transformer [28, 29] is widely used in the field of image processing because of its ability to better access global information and to integrate the CNN and the transformer. Pan et al. [23] proposed a high-quality reconstruction transformer to capture global image features for medical CT image reconstruction. The use of SR in the field of medical CT imaging: DL technology is extensively employed for medical CT imaging [30–32], and many scholars have applied SR technology to the medical field [33–35].…”
Section: Related Work
confidence: 99%
“…For the first time, Vision Transformer (ViT) divides an image into a sequence of non-overlapping patches, analyzes them as a sequence of elements similar to words, and produces state-of-the-art results demonstrating its effectiveness and superiority in image classification (Dosovitskiy et al.). Since then, ViT has been successfully applied to various other vision tasks, including medical imaging (Pan et al., 2021) and medical image analysis (Lyu et al.). However, the performance of the original ViT relies on a large labelled dataset of 300 million images, and the conventional wisdom is that transformers do not generalize well if they are trained on insufficient amounts of data.…”
Section: Introduction
confidence: 99%