2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw56347.2022.00061
Transformer for Single Image Super-Resolution

Cited by 333 publications (141 citation statements). References 24 publications.
“…Further, Nie et al. [66] recently proposed depthwise separable convolution to speed up SR architectures. Recently, the success of transformers in natural language processing has prompted researchers to adapt them to computer vision applications, including super-resolution [67]-[70], where they show promise in improving accuracy. However, even in such frameworks, an SR model is generally trained in a supervised manner, with LR images created by bicubic down-sampling.…”
Section: B. Supervised SR Methods (mentioning)
confidence: 99%
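As background for the depthwise separable convolution that the quoted passage credits with speeding up SR architectures, here is a minimal PyTorch sketch of the general technique (the module and layer names are hypothetical; this is not the architecture from [66]):

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factors a KxK convolution into a per-channel (depthwise) KxK
    convolution plus a 1x1 (pointwise) convolution, which cuts
    parameters and FLOPs versus a dense KxK convolution."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        # groups=in_ch applies one KxK filter per input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # the 1x1 convolution then mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Example: a 64-channel feature map, a common width in SR backbones
x = torch.randn(1, 64, 48, 48)
y = DepthwiseSeparableConv(64, 64)(x)
print(y.shape)  # torch.Size([1, 64, 48, 48])
```

For a 3x3 layer at 64 channels, the factored form uses 64*9 + 64*64 weights instead of 64*64*9, roughly an 8x reduction, which is where the speed-up comes from.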
“…Recently, vision transformers applied to SR have also been proposed, such as the Encoder-Decoder-based Transformer (EDT) [46], Efficient SR Transformer (ESRT) [47], and the Swin Image Restoration (SwinIR) [11] approach that is based on the Swin Transformer [48]. Approaches such as Efficient Long-Range Attention Network (ELAN) [12] and Hybrid Attention Transformer (HAT) [49], which attempt to combine CNN and transformer architectures, have also been proposed with further improvements in SR performance.…”
Section: Non-Blind SR Methods (mentioning)
confidence: 99%
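The Swin-style models named in this passage restrict self-attention to local windows to keep its cost manageable at SR resolutions. Below is a minimal sketch of that windowing idea, assuming square non-overlapping windows and using torch.nn.MultiheadAttention; it is an illustration only, not SwinIR's actual block (which adds relative position bias, shifted windows, and residual/MLP sublayers):

```python
import torch
import torch.nn as nn

def window_self_attention(x: torch.Tensor, attn: nn.MultiheadAttention,
                          win: int = 8) -> torch.Tensor:
    """Applies multi-head self-attention independently inside each
    non-overlapping win x win window of a (B, C, H, W) feature map.
    Assumes H and W are divisible by win."""
    B, C, H, W = x.shape
    # Partition into windows: (B * num_windows, win*win, C)
    x = x.view(B, C, H // win, win, W // win, win)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, win * win, C)
    # Attention cost is quadratic in win*win, not in H*W
    x, _ = attn(x, x, x, need_weights=False)
    # Reverse the partition back to (B, C, H, W)
    x = x.view(B, H // win, W // win, win, win, C)
    return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
feat = torch.randn(1, 64, 48, 48)  # 48 is divisible by win=8
out = window_self_attention(feat, attn)
print(out.shape)  # torch.Size([1, 64, 48, 48])
```

Restricting attention to 8x8 windows makes the cost linear in the number of windows rather than quadratic in H*W, which is what lets such blocks scale to the large feature maps SR requires.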
“…NAS and HW-NAS for CNN models have found their way from simple tasks on small datasets to complex applications on gigantic datasets. However, Transformer NAS methods have not yet been specialized for a wide variety of tasks and datasets, even though manually designed Transformers have been used in diverse tasks such as Super-Resolution [244], Semantic Segmentation [245], Medical Image Segmentation [246], etc. Also, algorithmic breakthroughs in search algorithms have enabled their use for other purposes, such as Winograd Convolution Search [247], Mixed-Precision Quantization Search [123], Fault-Tolerant Network Search [248], etc.…”
Section: G. Applications and Purposes (mentioning)
confidence: 99%
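The quoted survey treats NAS at a high level; as a purely illustrative sketch of what a search loop looks like, here is random search (a standard NAS baseline) over a toy CNN search space. The search space and the proxy score are hypothetical, not taken from any of the cited works:

```python
import random
import torch.nn as nn

# Hypothetical toy search space, for illustration only
SEARCH_SPACE = {"depth": [2, 4, 6], "width": [16, 32, 64], "kernel": [3, 5]}

def sample_arch() -> dict:
    """Draws one architecture uniformly from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def build(arch: dict) -> nn.Module:
    """Instantiates the sampled architecture as a plain conv stack."""
    layers, ch = [], 3
    for _ in range(arch["depth"]):
        layers += [nn.Conv2d(ch, arch["width"], arch["kernel"],
                             padding=arch["kernel"] // 2), nn.ReLU()]
        ch = arch["width"]
    return nn.Sequential(*layers)

def proxy_score(model: nn.Module) -> float:
    # Stand-in for a real fitness signal (e.g., validation PSNR after
    # short training); here we simply prefer smaller models so the
    # sketch stays runnable without data.
    return -sum(p.numel() for p in model.parameters())

best = max((sample_arch() for _ in range(20)),
           key=lambda a: proxy_score(build(a)))
print(best)
```

Real NAS and HW-NAS systems replace both the sampling strategy (evolutionary, gradient-based, RL) and the proxy score (trained accuracy, latency on target hardware), but the sample-evaluate-select loop is the same.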