2020
DOI: 10.1007/978-3-030-67070-2_3
Efficient Image Super-Resolution Using Pixel Attention

Cited by 273 publications (151 citation statements). References 36 publications.
“…The BI degradation model has been widely used to obtain LR images in the image SR tasks. In order to demonstrate the effectiveness of the RTAN, we compared it with 16 state-of-the-art CNN-based SR methods, including SRMDNF [7], NLRN [32], EDSR [17], DBPN [73], NDRCN [35], ACNet [38], FALSR-A [37], OISR-RK2-s [34], MCAN [47], A²F-SD [48], A2N-M [63], DeFiAN-S [61], IMDN [33], SMSR [36], PAN [59], MGAN [62], RNAN [55].…”
Section: Results With Bicubic (BI) Degradation Model
confidence: 99%
“…In order to demonstrate the powerful reconstruction ability of the proposed method with BD degradation model, we compare the RTAN with 14 state-of-the-art CNN-based models, i.e., SPMSR [4], SRCNN [5], FSRCNN [74], VDSR [12], SRMD [7], EDSR [17], RDN [16], IRCNN [75], SRFBN [76], RCAN [6], A²F-SD [48], IMDN [33], DeFiAN-S [61], PAN [59], and MGAN [62].…”
Section: Results With Blur-Downscale (BD) Degradation Model
confidence: 99%
“…In the earlier densely connected structures, such as DenseNet [47] and SRDenseNet [48], the output results of each level of convolution or modules are directly added or concatenated as the input of the next level. In order to better integrate the output results of all levels of modules to assist in the reconstruction of the final features, when the bypass connection is established, inspired by PAN [49], we propose the Multi-Level Feature Fusion Block (MLFFB) based on the spatial attention mechanism, which can generate more detailed features. The network structure of the MLFFB is illustrated in Figure 4.…”
Section: Multi-Level Feature Fusion Block
confidence: 99%
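The dense-fusion pattern this excerpt describes (concatenating the outputs of every level and mixing them back down with a 1×1 convolution, rather than only feeding each level into the next) can be illustrated with a minimal NumPy sketch. This is the generic pattern only, not the authors' MLFFB; the function names and the random weights are illustrative assumptions.

```python
import numpy as np

def conv1x1(x, w):
    """A 1x1 convolution is a linear map over the channel axis of a
    (C_in, H, W) feature map; w has shape (C_out, C_in)."""
    return np.tensordot(w, x, axes=([1], [0]))  # -> (C_out, H, W)

def dense_fusion(level_outputs, w_fuse):
    """Concatenate the (C, H, W) outputs of all L levels along the
    channel axis, then fuse back to C channels with a 1x1 conv."""
    stacked = np.concatenate(level_outputs, axis=0)  # (L*C, H, W)
    return conv1x1(stacked, w_fuse)                  # (C, H, W)

rng = np.random.default_rng(0)
C, H, W, L = 4, 8, 8, 3
levels = [rng.standard_normal((C, H, W)) for _ in range(L)]
w_fuse = rng.standard_normal((C, L * C)) / (L * C)  # fusion weights
fused = dense_fusion(levels, w_fuse)                # shape (4, 8, 8)
```

In a real network the fusion would typically be followed by a nonlinearity and, in the MLFFB's case, modulated by a spatial attention map.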
“…As shown in Fig. 2 (c), our CSRM is a multi-branch structure with pixel-attention modules [20] in each branch and contains several channel attention modules between two branches. We employ a 1×1 convolution layer at the beginning to reduce feature dimension by half.…”
Section: (B)
confidence: 99%
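The pixel attention (PA) module referenced in this excerpt, introduced by the PAN paper above, computes a per-pixel, per-channel attention map with a 1×1 convolution followed by a sigmoid, and rescales the input features element-wise. A minimal NumPy sketch of that idea, with illustrative (untrained) weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pixel_attention(x, w, b):
    """Pixel attention: a 1x1 convolution (w: (C, C), b: (C,)) plus a
    sigmoid yields an attention map the same shape as x (C, H, W);
    the input is reweighted element-wise by that map."""
    z = np.tensordot(w, x, axes=([1], [0])) + b[:, None, None]
    attn = sigmoid(z)   # every entry lies in (0, 1)
    return x * attn     # element-wise rescaling, shape preserved

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))        # (C, H, W) feature map
w = rng.standard_normal((8, 8)) * 0.1     # 1x1 conv weights
b = np.zeros(8)
y = pixel_attention(x, w, b)              # same shape as x
```

Because the sigmoid output is strictly between 0 and 1, the magnitude of every activation can only shrink; the module learns *where* and *in which channels* to attenuate features, at the cost of a single 1×1 convolution.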