2019
DOI: 10.1609/aaai.v33i01.33019340
Deep Embedding Features for Salient Object Detection

Abstract: Benefiting from the rapid development of Convolutional Neural Networks (CNNs), some salient object detection methods have achieved remarkable results by utilizing multi-level convolutional features. However, saliency training datasets are of limited scale due to the high cost of pixel-level labeling, which limits the generalization of the trained model to new scenarios at test time. Besides, some FCN-based methods directly integrate multi-level features, ignoring the fact that the noise in some fe…

Cited by 35 publications (17 citation statements); References 35 publications.
“…In contrast to that, our method polishes the representations at every level with multi-level context information at each step. Moreover, many methods utilize an extra refinement module, either as part of their model or as a post-process, to further recover the details of the predicted results, such as DenseCRF (Hou et al. 2017; Liu, Han, and Yang 2018), BRN (Wang et al. 2018), and GFRN (Zhuge, Zeng, and Lu 2019). By comparison, our method delivers superior performance without such modules.…”
Section: Refinement on Saliency Map (mentioning)
confidence: 98%
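
Several of the methods contrasted above rely on DenseCRF post-processing to sharpen predicted saliency maps. As a point of reference, here is a minimal sketch of that refinement step using the pydensecrf package; the kernel parameters (sxy, srgb, compat) and the inference step count are illustrative assumptions, not settings from any cited paper.

# Minimal DenseCRF refinement of a saliency map (sketch).
# Assumes pydensecrf is installed; parameter values are illustrative.
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def refine_saliency(image, saliency, steps=5):
    # image: HxWx3 uint8 RGB; saliency: HxW float in [0, 1].
    h, w = saliency.shape
    probs = np.stack([1.0 - saliency, saliency], axis=0)  # 2-class "softmax"
    crf = dcrf.DenseCRF2D(w, h, 2)
    crf.setUnaryEnergy(unary_from_softmax(probs))
    crf.addPairwiseGaussian(sxy=3, compat=3)  # smoothness kernel
    crf.addPairwiseBilateral(sxy=60, srgb=10, compat=5,
                             rgbim=np.ascontiguousarray(image))  # appearance kernel
    q = np.array(crf.inference(steps)).reshape(2, h, w)
    return q[1]  # refined foreground probability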
“…Wang et al. (2018) propose to better localize salient objects by exploiting contextual information with an attentive mechanism. Zhuge, Zeng, and Lu (2019) employ a structure that embeds prior information to generate attentive features and filter out cluttered information. Different from the above methods, which design sophisticated structures for information fusion, we use a simple structure to polish multi-level features recurrently and in parallel.…”
Section: Feature Integration (mentioning)
confidence: 99%
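
The statements above describe attention-driven filtering of multi-level features before fusion. The PyTorch sketch below illustrates the general idea of gating a shallow feature with a spatial attention map predicted from a deeper, more semantic feature; the module name, channel sizes, and fusion choices are hypothetical, not the cited architectures.

# Hypothetical attention-gated fusion of two feature levels (sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 1x1 conv predicts a spatial attention map from the deep feature.
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, shallow, deep):
        # Upsample the deep feature to the shallow feature's resolution.
        deep = F.interpolate(deep, size=shallow.shape[2:],
                             mode='bilinear', align_corners=False)
        # Suppress cluttered shallow responses with the deep attention map.
        gated = shallow * self.attn(deep)
        return self.fuse(gated + deep)

# Example: fuse 64-channel features from two resolutions.
out = AttentiveFusion(64)(torch.randn(1, 64, 56, 56), torch.randn(1, 64, 28, 28))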
“…Liu et al. (2019) built two pooling-based modules: one provided different layers with the location information of potential salient objects, while the other merged features. Zhuge, Zeng, and Lu (2019) transformed prior information into an embedding space to select attentive features and to filter out outliers. Feng, Lu, and Ding (2019) proposed a boundary-enhanced loss for learning exquisite object boundaries.…”
Section: Related Work: Skip Connection Structure (mentioning)
confidence: 99%
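
A boundary-enhanced loss of the kind attributed to Feng, Lu, and Ding (2019) can be approximated by up-weighting the cross-entropy near ground-truth contours. The sketch below extracts a boundary band with a max-pooling trick (dilation minus erosion); the band width and weight value are assumptions about the general technique, not the published loss.

# Hedged sketch of a boundary-enhanced loss: pixels near the ground-truth
# contour receive a larger weight in the BCE term.
import torch
import torch.nn.functional as F

def boundary_enhanced_bce(pred, target, band=5, boundary_weight=4.0):
    # pred: Bx1xHxW logits; target: Bx1xHxW binary float mask.
    pad = band // 2
    dilated = F.max_pool2d(target, band, stride=1, padding=pad)
    eroded = 1.0 - F.max_pool2d(1.0 - target, band, stride=1, padding=pad)
    boundary = dilated - eroded  # 1 within a band around the contour
    weight = 1.0 + (boundary_weight - 1.0) * boundary  # emphasize edges
    return F.binary_cross_entropy_with_logits(pred, target, weight=weight)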
“…RADF (Hu et al. 2018) uses recurrently aggregated deep features to detect salient objects. Zhuge, Zeng, and Lu (2019) argue that the noise in some features is harmful to saliency detection. PiCANet (Liu, Han, and Yang 2018), RAS (Chen et al. 2018), and PFA (Zhao and Wu 2019) all adopt the attention mechanism to obtain better saliency results.…”
Section: Related Work (mentioning)
confidence: 99%
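
The recurrent aggregation mentioned for RADF can be pictured as repeatedly fusing a running multi-level summary back into each feature level. The loop below is a hypothetical minimal version of that idea, not the RADF architecture; channel counts and the number of steps are assumptions.

# Hypothetical recurrent refinement of multi-level features (sketch).
import torch
import torch.nn as nn

class RecurrentAggregation(nn.Module):
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.refine = nn.Conv2d(channels * 2, channels, 3, padding=1)

    def forward(self, feats):
        # feats: list of BxCxHxW maps already resized to a common HxW.
        for _ in range(self.steps):
            summary = torch.stack(feats).mean(dim=0)  # fused summary
            feats = [torch.relu(self.refine(torch.cat([f, summary], dim=1)))
                     for f in feats]
        return feats

refined = RecurrentAggregation(32)([torch.randn(1, 32, 28, 28) for _ in range(3)])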