Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration
Eunho Lee,
Youngbae Hwang
Abstract: The transformer architecture achieves outstanding performance on computer vision tasks thanks to its ability to capture long-range dependencies. However, its complexity grows quadratically with spatial resolution, making it impractical for image restoration tasks. In this paper, we propose Decomformer, which efficiently captures global relationships by decomposing self-attention into a linear combination of vectors and coefficients, reducing the heavy computational cost. This approximation not only…
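The truncated abstract does not specify Decomformer's exact decomposition, but the general idea of factoring self-attention to avoid the quadratic score matrix can be sketched as follows. This is a generic kernelized (linear-attention-style) approximation, not the paper's method; the feature map `phi` here is a hypothetical choice for illustration.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard self-attention: materializes an N x N score matrix,
    # so cost scales as O(N^2 * d) in sequence length N.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def decomposed_attention(Q, K, V):
    # Kernelized approximation: with a positive feature map phi,
    # attention factors as phi(Q) @ (phi(K).T @ V), avoiding the
    # N x N matrix and costing O(N * d^2) instead. This is a generic
    # sketch, not Decomformer's specific decomposition.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # hypothetical feature map
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                    # d x d summary, independent of N
    Z = Qp @ Kp.sum(axis=0)          # per-row normalizer
    return (Qp @ KV) / Z[:, None]

rng = np.random.default_rng(0)
N, d = 64, 8
Q = rng.standard_normal((N, d))
K = rng.standard_normal((N, d))
V = rng.standard_normal((N, d))
out = decomposed_attention(Q, K, V)
print(out.shape)  # (64, 8)
```

Because the `d x d` summary `KV` does not depend on `N`, the cost is linear in the number of spatial tokens, which is what makes such factorizations attractive for high-resolution restoration inputs.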