2021
DOI: 10.3390/electronics10172072

Residual Triplet Attention Network for Single-Image Super-Resolution

Abstract: Single-image super-resolution (SISR) techniques have developed rapidly with the remarkable progress of convolutional neural networks (CNNs). Previous CNN-based SISR techniques mainly focus on network design while ignoring the interactions and interdependencies between different dimensions of the features in the middle layers, consequently hindering the powerful learning ability of CNNs. In order to address this problem effectively, a residual triplet attention network (RTAN) for efficient interac…
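The cross-dimensional attention idea mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: one branch is built per tensor axis, the learned convolution of a real triplet-attention branch is replaced here by a simple average over the two Z-pooled channels, and all shapes are illustrative assumptions.

```python
import numpy as np

def zpool(x, axis):
    """Z-pool: stack max- and mean-pooled slices taken along one axis,
    producing a compact 2-channel summary of that dimension."""
    return np.stack([x.max(axis=axis), x.mean(axis=axis)], axis=0)

def branch_attention(x, axis):
    """One attention branch (sketch): pool across `axis`, collapse the
    2-channel pooled map with a plain average (a stand-in for the learned
    convolution), squash with a sigmoid, and rescale the input."""
    pooled = zpool(x, axis)                               # shape (2, ...)
    weights = 1.0 / (1.0 + np.exp(-pooled.mean(axis=0)))  # sigmoid gate
    return x * np.expand_dims(weights, axis)              # broadcast back

x = np.random.rand(4, 8, 8)  # (C, H, W) feature map, illustrative size
# Average the three branches, one per dimension of the feature tensor.
out = sum(branch_attention(x, a) for a in range(3)) / 3.0
print(out.shape)  # (4, 8, 8): attention rescales but preserves shape
```

The point of the three branches is that each one lets a different pair of dimensions (channel/height, channel/width, height/width) interact when computing the attention weights.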

Cited by 3 publications (4 citation statements)
References 63 publications (114 reference statements)
“…Hui et al. [15] designed the information multi-distillation network (IMDN), which purifies each processed feature by explicitly splitting the preceding features into two segments: one part is retained and the other is further refined, streamlining the network parameters and boosting reconstruction performance. RTAN [21] introduces a lightweight residual triplet attention module that obtains cross-dimensional attention weights of the features. LW-AWSRN [22] builds a local fusion block from adaptive weighted residual units and a local residual fusion unit to remove the redundant scale branch.…”
Section: Lightweight CNNs for SR
confidence: 99%
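The IMDN-style feature-splitting step described in the statement above can be sketched in a few lines; `keep_ratio` and the tensor shapes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def imdn_split_step(features, keep_ratio=0.25):
    """One distillation step (sketch): split the channel dimension so that
    one part is kept ("distilled") as-is and the remainder is passed on
    for further refinement by later layers."""
    channels = features.shape[0]
    k = int(channels * keep_ratio)
    distilled, remaining = features[:k], features[k:]
    return distilled, remaining

x = np.random.rand(64, 8, 8)        # (channels, H, W) feature map
kept, rest = imdn_split_step(x)
print(kept.shape, rest.shape)       # (16, 8, 8) (48, 8, 8)
```

Because only a fraction of channels continues through the refinement path at each step, the parameter count of the subsequent layers shrinks, which is the "streamlining" effect the statement refers to.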
“…The direct data processing technique applies the graph convolution analysis approach to the 3D point cloud data directly, without first subjecting it to voxel filtering or multi-view conversion. The most influential deep learning techniques here are PointNet [11] and PointNet++ [12], together with the enhanced algorithms that followed [13][14][15][16][17][18][19][20].…”
Section: Introduction
confidence: 99%
“…Point cloud data is disorganized, unstructured, and densely packed yet sparsely distributed. PointNet [11] uses the T-net method, a point-by-point multi-layer perceptron, and channel-by-channel maximum pooling to learn the general properties of 3D point cloud data and to address its disorder. The algorithm's flaw, however, is that it cannot determine the local attributes of the point cloud.…”
Section: Introduction
confidence: 99%
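The point-by-point MLP plus channel-wise max pooling that the statement above attributes to PointNet can be sketched as follows; the layer size and random weights are arbitrary stand-ins, but the sketch demonstrates why the pooled descriptor is invariant to the ordering of the points, which is how PointNet handles the disorder of point clouds.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise_mlp(points, W, b):
    """Shared per-point layer: the same weights are applied to every point
    independently (points: N x 3, W: 3 x C), followed by ReLU."""
    return np.maximum(points @ W + b, 0.0)

def global_feature(points, W, b):
    """PointNet-style global descriptor (sketch): per-point features
    collapsed by channel-wise max pooling over all points."""
    return pointwise_mlp(points, W, b).max(axis=0)

W, b = rng.normal(size=(3, 16)), np.zeros(16)
cloud = rng.normal(size=(32, 3))     # 32 points in 3D
shuffled = rng.permutation(cloud)    # same points, different order
# Max pooling is symmetric, so reordering the points changes nothing:
print(np.allclose(global_feature(cloud, W, b),
                  global_feature(shuffled, W, b)))  # True
```

The flaw noted in the statement is visible here too: the max collapses over all points at once, so no neighborhood (local) structure survives in the descriptor.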
“…Method [19] proposes a two-stage multi-task network that focuses on image contrast and local brightness during network training. Method [20] proposes a three-layer residual attention network to solve the interaction problem of different-dimensional features in the middle layers of the network during training.…”
Section: Introduction
confidence: 99%