2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298938

Deep networks for saliency detection via local estimation and global search

Abstract: This paper presents a saliency detection algorithm by integrating both local estimation and global search. In the local estimation stage, we detect local saliency by using a deep neural network (DNN-L) which learns local patch features to determine the saliency value of each pixel. The estimated local saliency maps are further refined by exploring the high level object concepts. In the global search stage, the local saliency map together with global contrast and geometric information are used as global feature…
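As a rough illustration of the two-stage idea described in the abstract (a patch-level deep network for local saliency estimation, followed by a global stage that combines the local map with global contrast and geometric cues), the following Python/PyTorch sketch is provided. The network layout, the 51x51 patch size, the choice of cues, and the linear weighting in global_search are illustrative assumptions for this sketch, not the paper's actual DNN-L/DNN-G models.

# Minimal sketch of the two-stage pipeline from the abstract.
# All sizes, cues, and weights below are assumptions, not the authors' design.
import torch
import torch.nn as nn

class LocalSaliencyNet(nn.Module):
    """Stand-in for the local stage (DNN-L): scores a 51x51 RGB patch
    for the saliency of its centre pixel."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64, 1), nn.Sigmoid()
        )

    def forward(self, patches):  # patches: (N, 3, 51, 51)
        return self.classifier(self.features(patches)).squeeze(1)  # (N,)

def global_search(local_scores, global_contrast, geometry, weights=(0.5, 0.3, 0.2)):
    """Stand-in for the global stage: combines the local saliency score with
    global contrast and geometric (e.g. centre-prior) cues. The linear
    combination and the weights are assumptions for illustration."""
    w_l, w_c, w_g = weights
    return w_l * local_scores + w_c * global_contrast + w_g * geometry

if __name__ == "__main__":
    net = LocalSaliencyNet()
    patches = torch.rand(8, 3, 51, 51)   # 8 candidate patches
    local = net(patches)                  # local saliency per patch
    contrast = torch.rand(8)              # hypothetical global contrast cue
    geometry = torch.rand(8)              # hypothetical geometric cue
    print(global_search(local.detach(), contrast, geometry))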

Citations: cited by 602 publications (393 citation statements)
References: 36 publications
“…Pan et al [22] also trained two architectures on SALICON in an end-to-end manner: a shallow convnet trained from scratch, and a deeper one whose first three layers were adapted from the VGG network (SalNet). Other saliency models based on deep learning have been proposed for salient region detection [23,24,25,26]. In this paper, we focus on predicting eye fixations rather than detecting and segmenting salient objects in scenes.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…However, TBS method uses object information. The same or similar objects can ensure similar salient regions to some extent. (Methods referenced: [32], CA [33], CB [34], DRFI [25], DSR [35], FT [23], GC [36], GS [37], HM [38], HS [7], LEGS [10], LRR [39], MC [40], MCDL [9], MR [24], PCA [41], BD [42], RC [14], RFCN [11], SBF [43], SEG [44], SF [45], SMD [46], MDF [8], SS [47], SVO [17], TD [48], and DBS.)…”
Section: Experiments Of Aggregation Method
Citation type: mentioning
Confidence: 99%
“…OBS method was compared with 11 state-of-the-art methods, including FT [23], RC [14], SF [45], HS [7], MR [24], DRFI [25], GC [36], MC [40], BD [42], MDF [8], and LEGS [10].…”
Section: Experiments On State-of-the-art Datasets
Citation type: mentioning
Confidence: 99%