2022
DOI: 10.1007/s00371-022-02519-w
Cross-resolution feature attention network for image super-resolution

Cited by 6 publications (5 citation statements)
References 47 publications

“…We also provide a comparison of light-weight image SR models. The comparison methods include CARN [57], FALSR-A [58], IMDN [59], LAPAR-A [60], LatticeNet [61], SwinIR-light [19], Swin2SR-s [22], ESRT [62], ELAN-light [21], SPIN [63], and CRAFT [64]. Regarding the light-weight DiNAT-SR structure, the RDiNAG number N is set to 4.…”
Section: Methods (mentioning)
Confidence: 99%

“…To reduce the impact of background information or distractors, Zhang et al. [37] proposed a novel Siamese anchor-free network based on criss-cross attention, obtaining more accurate and robust tracking results. Considering that most attention mechanisms operate at only a single resolution, Liu et al. [38] proposed a cross-resolution feature attention mechanism to progressively reconstruct images at different scale factors. Similarly, we adapt and improve channel and spatial attention mechanisms to enhance reconstruction performance.…”
Section: Attention Mechanism (mentioning)
Confidence: 99%
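
The channel and spatial attention mechanisms mentioned in the statement above are commonly built as two small gating modules. Below is a minimal PyTorch sketch of the two standard variants (squeeze-and-excitation-style channel gating and CBAM-style spatial gating); the module names, reduction ratio, and kernel size are illustrative assumptions, not details taken from the cited papers.

```python
# Minimal sketch of channel and spatial attention as commonly used in SR
# networks. Hyperparameters (reduction=16, kernel_size=7) are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style: reweight channels by global statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))             # per-channel gates

class SpatialAttention(nn.Module):
    """CBAM-style: gate each spatial location using pooled channel maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)            # B x 1 x H x W
        mx, _ = x.max(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate                              # per-pixel gates
```

Applied in sequence, these modules first decide which feature maps matter (channel) and then where in the image they matter (spatial), which is the usual division of labor when the two are combined in SR backbones.
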
“…In recent years, researchers have proposed many lightweight SR algorithms that aim to provide high-quality reconstruction results while maintaining low computational complexity. For example, enhanced deep SR [26] and residual channel attention networks [27] utilize residual structures and channel attention mechanisms to improve image reconstruction quality. These methods maintain relatively low computational complexity while improving SR performance.…”
Section: Related Work (mentioning)
Confidence: 99%
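
To make the combination of residual structures and channel attention concrete, here is a minimal sketch of a residual channel attention block in the spirit of RCAN [27]. It reuses the ChannelAttention module from the sketch above; the channel count and layer sizes are illustrative assumptions, not the cited paper's exact configuration.

```python
# Minimal sketch of a residual channel attention block: a conv-ReLU-conv
# body gated by channel attention, added back via a skip connection.
# Assumes the ChannelAttention class defined in the previous sketch.
import torch.nn as nn

class ResidualChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            ChannelAttention(channels),              # reweight feature maps
        )

    def forward(self, x):
        return x + self.body(x)                      # residual connection
```

The skip connection means each block only has to learn a residual correction to its input, which is a large part of why deep stacks of such blocks stay trainable while keeping per-block cost low.
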