2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2017.683
Residual Attention Network for Image Classification

Abstract: In this work, we propose "Residual Attention Network", a convolutional neural network using attention mechanism which can incorporate with state-of-art feed forward network architecture in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules which generate attention-aware features. The attention-aware features from different modules change adaptively as layers going deeper. Inside each Attention Module, bottom-up top-down feedforward structure is used to unfold …
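The Attention Modules described in the abstract combine a trunk branch with a soft mask branch via attention residual learning, H(x) = (1 + M(x)) * T(x). A minimal NumPy sketch of that combination rule follows; `trunk` and `soft_mask` here are illustrative placeholders, not the paper's convolutional branches:

```python
import numpy as np

def trunk(x):
    # Placeholder trunk branch: stands in for the stacked residual units
    # that produce the features to be attended.
    return np.maximum(x, 0.0)

def soft_mask(x):
    # Placeholder mask branch: a sigmoid squashes scores into (0, 1),
    # standing in for the paper's bottom-up top-down (downsample/upsample) path.
    return 1.0 / (1.0 + np.exp(-x))

def attention_module(x):
    # Attention residual learning: H(x) = (1 + M(x)) * T(x).
    # The "+1" preserves an identity path so that stacking many modules
    # does not repeatedly attenuate the features.
    return (1.0 + soft_mask(x)) * trunk(x)

x = np.array([[-1.0, 0.5, 2.0]])
out = attention_module(x)
```

Because the mask only modulates the trunk output, negative inputs zeroed by the trunk stay zero, while positive activations are amplified by at most a factor of two.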

Cited by 3,284 publications (1,888 citation statements)
References 54 publications
“…Second, by improving the network design and loss function to prevent the removal of subtle true‐positive signals. The loss function might be further constrained to prevent removal of the true high‐intensity signals and the network design might be further advanced to learn more complex spatial patterns such as using more residual connections or different modules such as the attention mechanism …”
Section: Discussion (mentioning, confidence: 99%)
“…This function helps to filter out unimportant information and improve the efficiency of information processing. In deep learning, this idea has been explicitly simulated as “Attention.” Attention mechanisms are widely used in natural language processing, computer vision, and other fields.…”
Section: Related Work (mentioning, confidence: 99%)
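The "filtering out unimportant information" described in the citation above can be sketched as a softmax weighting over feature regions. The scores and features below are illustrative values, not taken from any cited paper:

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Hypothetical relevance scores for four feature regions.
scores = np.array([0.1, 2.0, 0.3, -1.0])
weights = softmax(scores)          # sums to 1; highest score dominates

# Hypothetical per-region feature vectors.
features = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0],
                     [0.5, 0.5]])

# Weighted sum emphasizes the high-scoring region and suppresses the rest.
attended = weights @ features
```

The weighted sum lets the network attend to task-relevant regions while low-scoring regions contribute little, which is the behavior the quoted passage describes.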
“…Attention, or so called saliency detection, has been shown to play an important role in a wide variety of computer vision and robotics tasks [4, 6, 23-25]. Despite their different application scenarios, such works utilize a neural network to learn to automatically locate task-relevant regions.…”
Section: B. Attention Model for Place Recognition (mentioning, confidence: 99%)
“…There have been attempts in utilizing attention in many tasks, such as image classification [4], image retrieval [5] or segmentation [6], while attention can also play an important role in visual place recognition [7]. Visual cues that are relevant to place recognition are generally not uniformly distributed across an image, therefore focusing on important regions, as opposed to irrelevant or confusion areas, is key to improve the place recognition performance.…”
(mentioning, confidence: 99%)