2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00428
Embedded Block Residual Network: A Recursive Restoration Model for Single-Image Super-Resolution

Cited by 123 publications (59 citation statements) · References 28 publications
“…Liu et al. [29] proposed a non-local module that captures deep feature correlations between each location and its neighborhood, and employed a recurrent neural network structure for deep feature propagation. Qiu et al. [30] proposed an embedded block residual network in which different modules restore information at different frequencies for texture SR. Hu et al. [31] proposed a channel-wise and spatial feature modulation network in which LR features are transformed into highly informative features by feature-modulation memory modules. Jing et al. [32] took the LR image together with its downsampled-resolution (DR) and upsampled-resolution (UR) versions as inputs, and learned internal structure coherence from the UR-LR and LR-DR pairs to generate a hierarchical dictionary.…”
Section: Related Work
confidence: 99%
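The frequency-wise restoration idea attributed to Qiu et al. [30] can be illustrated with a toy band decomposition: an image is split into several high-frequency residual bands plus a low-frequency base, and each band could then be handled by its own module. This is a hypothetical sketch of the decomposition only, not the authors' network; `gaussian_blur` here is a crude box filter standing in for a proper low-pass filter.

```python
import numpy as np

def gaussian_blur(x, k=5):
    # Crude box blur as a stand-in low-pass filter (assumption, not the paper's).
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def frequency_bands(img, levels=3):
    """Split an image into high-frequency residual bands plus a low base,
    mirroring the idea that different modules restore different frequencies."""
    bands, current = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_blur(current)
        bands.append(current - low)   # high-frequency residual at this level
        current = low
    bands.append(current)             # remaining low-frequency base
    return bands

img = np.arange(64, dtype=float).reshape(8, 8)
bands = frequency_bands(img, levels=3)
recon = sum(bands)
print(np.allclose(recon, img))  # the bands sum back to the input exactly
```

Because the split is telescoping (each level stores `current - low` and passes `low` on), summing all bands reconstructs the input, which is what makes per-band residual restoration well posed.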
“…With a different focus, Wang et al. [53] analyse multiple Gaussian degradations in an attempt to reduce reconstruction error on real-world data, Qin et al. [54] combine channel and spatial attention to build a deep multilevel residual attention network, and Wu et al. [55] propose a novel perceptual loss for upsampling. Although these are the most recent techniques, none of them surpasses the benchmark established by an earlier but still state-of-the-art method [39].…”
Section: Prior Work
confidence: 99%
“…Like the methods of [1], [22], [28], [39], [62], WDN is also trained on the DIV2K dataset of Timofte et al. [65]. The DIV2K dataset contains 1000 images at 2K resolution, of which 800 are for training and 100 for validation.…”
Section: Experiments and Analysis
confidence: 99%
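The DIV2K split described above (800 training images, 100 validation images) is commonly enumerated by zero-padded filename index. The four-digit naming scheme below is an assumption about the on-disk layout, not something stated in the excerpt.

```python
# Hypothetical DIV2K index lists: 800 HR images for training and 100 for
# validation, as stated in the excerpt; the "0001".."0900" zero-padded
# naming is an assumed convention for illustration.
train_ids = [f"{i:04d}" for i in range(1, 801)]    # 0001..0800 -> training
valid_ids = [f"{i:04d}" for i in range(801, 901)]  # 0801..0900 -> validation

print(len(train_ids), len(valid_ids))  # 800 100
```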