2021
DOI: 10.1109/access.2021.3069775

Lightweight Attended Multi-Scale Residual Network for Single Image Super-Resolution

Abstract: Recently, deep convolutional neural networks (CNN) have been widely applied to the single image super-resolution (SISR) task and have achieved significant progress in reconstruction performance. However, most existing CNN-based SR models are impractical for real-world applications due to their numerous parameters and heavy computation. To tackle this issue, we propose a lightweight attended multi-scale residual network (LAMRN) in this work. Specifically, we present an attended multi-scale residual block (AMSRB) to extrac…

Cited by 14 publications (5 citation statements). References 36 publications.
“…Zhang et al. [66] first incorporated SE [20] with SR and pushed the state-of-the-art performance of SISR. More recent works, such as [9,21,29,43,44,58,59,61], extend this idea by adopting different spatial attention mechanisms or designing advanced attention blocks.…”
Section: Attention-Based SR Methods
confidence: 99%
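The SE mechanism mentioned above can be sketched in a few lines: squeeze the feature map to one scalar per channel, pass it through a small bottleneck, and use the resulting sigmoid gates to reweight the channels. This is an illustrative NumPy sketch of the general SE idea, not the implementation from any of the cited papers; the weight shapes and reduction ratio are assumptions.

```python
import numpy as np

def se_channel_attention(feats, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    feats: (C, H, W) feature map.
    w1, b1 / w2, b2: the two fully connected layers of the bottleneck
    (w1 reduces C by a ratio r, w2 restores it).
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = feats.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid gives per-channel gates in (0, 1)
    s = np.maximum(w1 @ z + b1, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the original feature map
    return feats * s[:, None, None]

# Toy usage: C = 4 channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
w1, b1 = rng.standard_normal((2, 4)), np.zeros(2)
w2, b2 = rng.standard_normal((4, 2)), np.zeros(4)
y = se_channel_attention(x, w1, b1, w2, b2)
assert y.shape == x.shape
```

Because the gates lie in (0, 1), the block can only attenuate channels, never amplify them; the network learns which channels to preserve.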
“…Compared with CNNs, the transformer has better global modeling capability. Inspired by the significant achievements of multi-scale processing in the domains of image restoration [40][41][42] and 3D shape representation [43], we have introduced the concept of multi-scale into our network. This enhancement allows TMSDNet to further explore the deep multi-scale representation of the object in the voxel feature domain by extracting multi-scale global voxel features from the voxel features output by the combined-transformer block.…”
Section: Overview
confidence: 99%
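The multi-scale idea referenced above amounts to summarizing the same feature map at several spatial resolutions, so coarse views capture global context while fine views keep local detail. A minimal NumPy sketch, with pooling window sizes chosen here purely for illustration:

```python
import numpy as np

def avg_pool(x, s):
    """Non-overlapping s x s average pooling of a 2-D map (edges cropped)."""
    h, w = x.shape
    return x[:h // s * s, :w // s * s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def multi_scale_features(x, scales=(1, 2, 4)):
    # One pooled view per scale: scale 1 keeps full detail,
    # larger scales summarize increasingly global context.
    return [avg_pool(x, s) for s in scales]

x = np.arange(16.0).reshape(4, 4)
f1, f2, f4 = multi_scale_features(x)
assert f1.shape == (4, 4) and f2.shape == (2, 2) and f4.shape == (1, 1)
```

In practice the pooled maps are upsampled back and fused (e.g., concatenated) so later layers see all scales at once.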
“…On the other hand, so-called lightweight SR models, which aim to reduce the number of parameters and FLOPs while maintaining performance, are actively being studied [8,9,10,11,22,23,37,38,39]. For instance, FSRCNN [22] utilized deconvolution layers to achieve faster processing speed compared to SRCNN.…”
Section: B. Lightweight Super-Resolution Models
confidence: 99%
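The deconvolution trick cited for FSRCNN lets the network run all its convolutions on the small low-resolution input and upsample only once at the end, which is where the speedup comes from. A naive single-channel NumPy sketch of transposed convolution upscaling (kernel size, stride, and values are toy assumptions, not FSRCNN's learned parameters):

```python
import numpy as np

def deconv_upscale(x, kernel, stride):
    """Naive 2-D transposed convolution (deconvolution) upscaling.

    Each input pixel stamps a scaled copy of the kernel onto the output
    grid at stride-spaced positions; overlapping stamps accumulate.
    """
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((h * stride + kh - stride, w * stride + kw - stride))
    for i in range(h):
        for j in range(w):
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out

# Toy usage: x2 upscaling of a 4x4 map with a 2x2 averaging kernel
lr = np.ones((4, 4))
k = np.full((2, 2), 0.25)
sr = deconv_upscale(lr, k, stride=2)
assert sr.shape == (8, 8)
```

With stride equal to the scale factor, the convolutions before this layer cost a factor of scale² less than running them at full resolution, which is the efficiency argument behind FSRCNN's design.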