2018
DOI: 10.1007/s11042-018-6591-3

Self-attention recurrent network for saliency detection

Abstract: Feature maps in deep neural networks generally carry different semantics. Existing methods often ignore these characteristics, which may lead to sub-optimal results. In this paper, we propose a novel end-to-end deep saliency network that effectively utilizes multi-scale feature maps according to their characteristics. Shallow layers often contain more local information, while deep layers have advantages in global semantics. Therefore, the network generates elaborate saliency maps by enhancing local and global…
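
Below is a minimal PyTorch sketch of the idea the abstract describes: fusing shallow, high-resolution features (local detail) with deep, low-resolution features (global semantics) into a single saliency map. The class name, layer counts, and channel sizes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (not the paper's exact architecture): fuse shallow,
# high-resolution features (local detail) with deep, downsampled
# features (global semantics) to predict a saliency map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSaliencyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow stage: fine spatial detail, weak semantics.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Deep stage: downsampled, semantically richer features.
        self.deep = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 conv fuses the concatenated scales into a 1-channel map.
        self.fuse = nn.Conv2d(32 + 64, 1, 1)

    def forward(self, x):
        local_feat = self.shallow(x)            # (B, 32, H, W)
        global_feat = self.deep(local_feat)     # (B, 64, H/4, W/4)
        global_up = F.interpolate(global_feat, size=local_feat.shape[2:],
                                  mode='bilinear', align_corners=False)
        fused = torch.cat([local_feat, global_up], dim=1)
        return torch.sigmoid(self.fuse(fused))  # saliency in [0, 1]
```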

Cited by 14 publications (13 citation statements). References 44 publications.
“…To further reduce the accumulated errors in the transmission of semantic information between layers, this paper adopts the self-attention model [19,20] to capture the spatial relationships between long-distance named entities in sports text. Because the transformer framework is used for the encoding and decoding conversions, the self-attention model not only effectively solves the long-distance dependency problem of recurrent neural networks (RNNs) [21,22], but also improves the overall efficiency of the model.…”
Section: Methods
Mentioning confidence: 99%
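
For reference, a minimal sketch of the generic scaled dot-product self-attention the statement refers to: every position attends to every other position in a single step, which is how it sidesteps the long-distance dependency problem of RNNs. The class and dimensions are illustrative, not the cited paper's exact model.

```python
# Generic scaled dot-product self-attention (transformer-style), shown
# for illustration only. All pairwise token affinities are computed at
# once, so a long-range dependency costs one step, not many RNN steps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # (batch, seq_len, seq_len): affinity between every token pair.
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v                  # weighted sum over all positions

tokens = torch.randn(2, 50, 64)          # e.g. 50 word embeddings
out = SelfAttention(64)(tokens)          # same shape, context-mixed
```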
“…The corresponding saliency maps are generated by Sun et al. (2018). The parameter θ in the saliency-guided seeded region growing method is set to 10.…”
Section: Experiment Settings
Mentioning confidence: 99%
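
A hedged sketch of seeded region growing with a threshold parameter θ, assuming θ bounds the intensity difference to the seed as in classic region growing; the cited method's exact growing criterion may differ, and the saliency-based seed selection shown in the comment is an assumption.

```python
# Illustrative seeded region growing (NumPy). Pixels join the region
# when their intensity differs from the seed's by at most theta.
import numpy as np
from collections import deque

def region_grow(image, seed, theta=10):
    """Grow a region from `seed` (row, col) over 4-connected
    neighbours whose intensity is within `theta` of the seed's."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(image[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(float(image[nr, nc]) - seed_val) <= theta:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Seeds could plausibly come from peaks of a saliency map, e.g.:
# seed = np.unravel_index(saliency.argmax(), saliency.shape)
```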
“…The attention mechanism refers to selectively focusing on certain useful visual information while ignoring other parts, which is essentially a weighted-sharing idea. This idea is widely applied in computer vision tasks, especially semantic segmentation [18], [19], saliency detection [20], [21], image captioning [22], [23], and so on. The crowd counting task often involves unevenly distributed crowds, so, building on the attention mechanism, this paper designs a Spatial Attention Conversion module that extracts crowd spatial features as an accessory module for capturing useful information and adds it to the method.…”
Section: B. Attention Mechanism
Mentioning confidence: 99%
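
For illustration, a minimal CBAM-style spatial attention sketch of the weighted-sharing idea: channel-pooled statistics are turned into a per-pixel weight map that re-weights the feature map, emphasizing informative regions. The `SpatialAttention` class here is an assumption for illustration; the cited Spatial Attention Conversion module is likely more elaborate.

```python
# Minimal spatial attention (CBAM-style), illustrative only. A 2-channel
# map of per-pixel mean and max over channels is convolved into a single
# weight map in [0, 1], which re-weights the input features.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feat):                       # feat: (B, C, H, W)
        avg_pool = feat.mean(dim=1, keepdim=True)  # (B, 1, H, W)
        max_pool = feat.max(dim=1, keepdim=True).values
        weights = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return feat * weights                      # per-pixel re-weighting

feat = torch.randn(1, 64, 32, 32)
out = SpatialAttention()(feat)                     # same shape, re-weighted
```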