2023
DOI: 10.1007/s11063-023-11213-4

Lightweight Image Super-Resolution with ConvNeXt Residual Network

Abstract: Single image super-resolution (SISR) based on convolutional neural networks has been very successful in recent years. However, the computational cost of such models is high, making them difficult to apply on resource-constrained devices; a major challenge for existing approaches is therefore to find a balance between the complexity of the CNN model and the quality of the resulting SR. To solve this problem, various lightweight SR networks have been proposed. In this paper, we propose lightweight and efficient residual networks (IRN), whi…
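The abstract is cut off above. As a rough sketch of the kind of building block the paper's title refers to, the PyTorch snippet below implements a generic ConvNeXt-style residual block (depthwise 7x7 convolution, LayerNorm, inverted-bottleneck MLP, skip connection). The class name, channel width, and expansion factor are illustrative assumptions; the paper's actual IRN blocks may differ.

```python
import torch
import torch.nn as nn

class ConvNeXtResidualBlock(nn.Module):
    """Generic ConvNeXt-style residual block (after Liu et al., 2022):
    depthwise conv -> LayerNorm -> pointwise expansion -> GELU ->
    pointwise projection, plus a skip connection. Shown only to
    illustrate the building block; not the paper's exact IRN design."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=7,
                                padding=3, groups=channels)  # depthwise
        self.norm = nn.LayerNorm(channels)  # applied channel-last below
        self.pwconv1 = nn.Linear(channels, expansion * channels)
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(expansion * channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # back to NCHW
        return x + residual        # residual connection

# Example: a 64-channel feature map passes through with shape unchanged.
if __name__ == "__main__":
    block = ConvNeXtResidualBlock(64)
    out = block(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```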

Cited by 4 publications (2 citation statements). References 47 publications.
“…To demonstrate the capacity and superiority of MAFDN, representative lightweight SISR methods and recently proposed state-of-the-art models are compared, i.e., SRCNN [19], PAN [74], RFDN [58], FDIWN [38], DDistill-SR [39], PILN [3], JSNet [70], DiVANet [59], EMASRN [60], DRSAN [40], ESRT [67], MMSR [68], HPUN [66], FDSCSR [64], LESR [62], IRN [63], AFAN [61], and FRN [65]. Overall, these competitors cover attention-based, feature distillation-based, and NAS-based SISR models.…”
Section: Lightweight SISR Methods for Comparisons (mentioning)
confidence: 99%
“…To make a good balance between the computational cost and reconstruction quality, Zhang et al. [63] simplified feature aggregation by using residual modules for feature learning. Wang et al. [64] proposed to reduce repetitive feature information via feature de-redundancy and self-calibration, thus enhancing model efficiency.…”
Section: Lightweight CNN for Complexity-Oriented SISR (mentioning)
confidence: 99%
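To make the quoted description concrete, the sketch below shows a minimal lightweight-SR layout of the kind these citations discuss: shallow feature extraction, a chain of plain residual modules for feature learning, and a PixelShuffle upsampler. All names, block counts, and channel widths are assumptions for illustration; this is not the architecture of [63] or [64].

```python
import torch
import torch.nn as nn

class LightweightSRNet(nn.Module):
    """Minimal lightweight-SR sketch: residual modules for feature
    learning plus a PixelShuffle tail. Hyperparameters are illustrative
    assumptions, not the cited models' settings."""

    def __init__(self, scale: int = 4, channels: int = 48, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)  # shallow features
        self.body = nn.Sequential(*[
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            ) for _ in range(n_blocks)
        ])
        # Upsampler: expand channels by scale^2, then rearrange to space.
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.head(x)
        res = feat
        for block in self.body:
            res = res + block(res)     # residual feature learning
        return self.tail(feat + res)   # long skip, then upsample

# Example: a 48x48 LR input becomes a 192x192 SR output at scale 4.
if __name__ == "__main__":
    net = LightweightSRNet(scale=4)
    hr = net(torch.randn(1, 3, 48, 48))
    print(hr.shape)  # torch.Size([1, 3, 192, 192])
```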