2024
DOI: 10.1049/cit2.12402

Laplacian attention: A plug‐and‐play algorithm without increasing model complexity for vision tasks

Xiaolei Chen,
Yubing Lu,
Runyu Wen

Abstract: Most prevailing attention mechanism modules in contemporary research are convolution‐based; while these modules improve the accuracy of deep learning networks on visual tasks, they also increase overall model complexity. To address this problem, this paper proposes Laplacian attention (LA), a plug‐and‐play algorithm that does not increase model complexity. The LA algorithm first calculates the similarity distance between feature points in the feature space and f…
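
The abstract is cut off before the rest of the algorithm is described, so only the first step is known: LA measures similarity distances between feature points in feature space. A minimal sketch of how such a parameter-free, distance-based reweighting could be wired into a PyTorch backbone is given below; the Laplacian kernel, the softmax normalization, the residual connection, and the `LaplacianAttentionSketch` name are all assumptions for illustration, not the authors' published method.

```python
# Hypothetical sketch of a parameter-free, Laplacian-kernel attention reweighting.
# The abstract only states that LA "calculates the similarity distance between
# feature points in the feature space"; everything past that step is assumed.
import torch
import torch.nn as nn


class LaplacianAttentionSketch(nn.Module):
    """Plug-and-play reweighting with no learnable parameters (assumed design)."""

    def __init__(self, sigma: float = 1.0):
        super().__init__()
        self.sigma = sigma  # bandwidth of the assumed Laplacian kernel

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from any backbone stage
        b, c, h, w = x.shape
        feats = x.flatten(2).transpose(1, 2)      # (B, N, C), N = H*W feature points
        # Pairwise L2 "similarity distance" between feature points in feature space
        dist = torch.cdist(feats, feats, p=2)     # (B, N, N)
        # Laplacian kernel turns distances into similarities (assumption)
        affinity = torch.exp(-dist / self.sigma)  # (B, N, N)
        weights = affinity.softmax(dim=-1)        # row-normalized attention weights
        out = torch.bmm(weights, feats)           # (B, N, C) reweighted features
        out = out.transpose(1, 2).reshape(b, c, h, w)
        return x + out                            # residual keeps it plug-and-play
```

Because this sketch introduces no learnable parameters, inserting it after any backbone stage leaves the parameter count unchanged, which is consistent with the paper's "without increasing model complexity" claim; the actual LA formulation may differ.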

Cited by 0 publications. References: 20 publications.