2021
DOI: 10.1016/j.neucom.2020.11.010

Residual scale attention network for arbitrary scale image super-resolution

Cited by 35 publications (7 citation statements)
References 44 publications
“…However, conditioning the entire SR network on a single, simple piece of scale information would restrict performance. The architecture of Meta-SR has been further explored and improved in subsequent works (e.g., ArbRCAN [48], RSI-HFAS [49], RSAN [50]). These works often design a scale parser module that treats the magnification factor as a conditional input to the network, or an upsampling module that dynamically resizes the feature map according to the magnification.…”
Section: B. Image SR With Continuous Magnification
confidence: 99%
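To make the two mechanisms in the quoted passage concrete, here is a minimal PyTorch sketch combining them: a small "scale parser" maps the magnification factor to per-channel modulation weights, and the feature map is then resized dynamically to the requested (possibly non-integer) scale. All names and layer choices here (e.g., ScaleConditionedUpsampler, scale_parser) are illustrative assumptions, not the architecture of any cited paper.

```python
# Illustrative sketch only: NOT the architecture of Meta-SR, ArbRCAN,
# RSI-HFAS, or RSAN. All module and attribute names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleConditionedUpsampler(nn.Module):
    """Conditions features on the magnification factor, then resizes
    the feature map dynamically to that magnification."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # "Scale parser": maps the scalar magnification to per-channel
        # modulation weights in (0, 1).
        self.scale_parser = nn.Sequential(
            nn.Linear(1, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feat: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, _, _ = feat.shape
        s = torch.full((b, 1), scale, device=feat.device)
        mod = self.scale_parser(s).view(b, c, 1, 1)
        feat = feat * mod  # the scale acts as a conditional input
        # Dynamic resizing works for arbitrary, non-integer magnifications.
        feat = F.interpolate(feat, scale_factor=scale,
                             mode="bilinear", align_corners=False)
        return self.refine(feat)

up = ScaleConditionedUpsampler(64)
out = up(torch.randn(1, 64, 32, 32), scale=2.7)  # -> (1, 64, 86, 86)
```

Because the upsampling step interpolates rather than using a fixed pixel-shuffle layer, a single trained module can serve any magnification, which is the property the continuous-magnification methods above aim for.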
“…The compared models for specific scale factors are DBPN [13], GFSR [19], [54], SAN [55], HAN [56], RCAN [15], NLSA [57], FSDN [25], IPT [17], and EGADNet [12]. The compared models trained for arbitrary scale factors are ASDN [25], Meta-RDN [22], ArbRCAN [28], RSAN [24], LIIF [27], fSISR [30], and RPB [29]. The scale factors considered for comparisons with state-of-the-art models are ×2, ×3, and ×4.…”
Section: B. Testing Details
confidence: 99%
“…However, this additional information is not used by the rest of the model, i.e., the backbone parameters of the model are the same for all scale factors for which the model was trained. Fu et al. [24] extend the meta module by combining its original linear input vector with a quadratic and a bicubic encoded vector. This approach increases the complexity of the encoding vector as the degree grows, without necessarily improving the performance of the proposal.…”
Section: Introduction
confidence: 99%
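For illustration, the following hedged sketch shows a Meta-SR-style weight predictor whose input encoding is extended from purely linear terms to higher polynomial powers, in the spirit of the extension described above. The encoding layout (fractional offsets plus reciprocal scale) and all names are assumptions for this sketch, not the exact design of Meta-SR or of Fu et al. [24].

```python
# Hedged sketch: a meta-module that predicts filter weights from a
# polynomial encoding of position and scale. Assumed design, for
# illustration only.
import torch
import torch.nn as nn

def polynomial_encode(v: torch.Tensor, degree: int = 3) -> torch.Tensor:
    """Stack the powers v, v^2, ..., v^degree along the last dimension."""
    return torch.cat([v ** d for d in range(1, degree + 1)], dim=-1)

class MetaWeightPredictor(nn.Module):
    """Predicts per-position convolution filter weights from the encoding."""

    def __init__(self, in_ch: int = 64, out_ch: int = 3,
                 ksize: int = 3, degree: int = 3):
        super().__init__()
        self.degree = degree
        enc_dim = 3 * degree  # three base features, each raised to `degree` powers
        self.mlp = nn.Sequential(
            nn.Linear(enc_dim, 256), nn.ReLU(inplace=True),
            nn.Linear(256, in_ch * out_ch * ksize * ksize),
        )

    def forward(self, rel_coords: torch.Tensor) -> torch.Tensor:
        # rel_coords: (N, 3) = (fractional x offset, fractional y offset, 1/scale)
        enc = polynomial_encode(rel_coords, self.degree)  # (N, 3 * degree)
        return self.mlp(enc)  # (N, in_ch * out_ch * ksize * ksize)

pred = MetaWeightPredictor()
filters = pred(torch.rand(5, 3))  # weights for 5 HR positions; shape (5, 1728)
```

Raising `degree` enlarges only the first linear layer of the predictor, which makes concrete why a richer encoding adds complexity without guaranteeing better filters.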
“…Meta-SR [HMZ19] applies meta-learning to predict the filter weights for different scale factors; however, it does not exploit scale information during feature learning. To address this, RSAN [YFL21] introduces a residual scale attention network in which the scale is employed as prior knowledge to learn discriminative features.…”
Section: Related Work
confidence: 99%
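A minimal sketch under one plausible reading of "residual scale attention" follows: the scale factor serves as a prior that gates a residual branch through channel attention, so scale information influences feature learning rather than only the final upsampling. This is an assumption-laden illustration, not RSAN's published architecture.

```python
# Minimal sketch of scale-as-prior channel attention with a residual
# connection. Not RSAN's actual design; all names are hypothetical.
import torch
import torch.nn as nn

class ResidualScaleAttention(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Scale prior -> per-channel attention weights in (0, 1).
        self.attn = nn.Sequential(
            nn.Linear(1, channels // 4), nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = torch.full((b, 1), scale, device=x.device)
        a = self.attn(s).view(b, c, 1, 1)   # scale-derived attention
        return x + self.body(x) * a         # residual branch gated by scale

blk = ResidualScaleAttention(64)
y = blk(torch.randn(1, 64, 48, 48), scale=3.0)  # spatial size unchanged
```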