2024
DOI: 10.1109/access.2024.3375360
Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration

Eunho Lee, Youngbae Hwang

Abstract: A transformer architecture achieves outstanding performance in computer vision tasks thanks to its ability to capture long-range dependencies. However, the quadratic growth of its complexity with spatial resolution makes it impractical for image restoration tasks. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, reducing the heavy computational cost. This approximation not only…
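The core idea the abstract describes — replacing the explicit N x N attention map with a factored product so that cost grows linearly with the number of tokens — can be illustrated with a generic linearized-attention sketch. Note this is an assumption-laden illustration of the complexity argument, not the paper's actual Decomformer decomposition; the feature map `phi` and all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Forms an explicit N x N attention matrix: O(N^2 * d) time and memory.
    A = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return A @ V

def linearized_attention(Q, K, V, eps=1e-6):
    # Generic linear-attention sketch (NOT the paper's exact method):
    # with a positive feature map phi, associativity lets us compute
    # phi(Q) @ (phi(K).T @ V) instead of (phi(Q) @ phi(K).T) @ V,
    # giving O(N * d^2) cost -- linear in the token count N.
    phi = lambda x: np.maximum(x, 0) + 1.0  # simple positive map (assumption)
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                      # d x d summary, independent of N
    Z = Qp @ Kp.sum(axis=0) + eps      # per-query normalizer
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, d = 128, 16                         # N tokens (pixels), d channels
Q, K, V = rng.standard_normal((3, N, d))
out = linearized_attention(Q, K, V)
print(out.shape)                       # (128, 16)
```

For a high-resolution image, N is the number of spatial positions, so avoiding the N x N map is what makes such decompositions attractive for restoration workloads.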
