2018
DOI: 10.1016/j.neucom.2017.10.052

Stereoscopic saliency model using contrast and depth-guided-background prior

Cited by 63 publications (32 citation statements)
References 43 publications
“…We compare our proposed algorithm with 10 state-of-the-art RGB-D saliency detection models, including ACSD [26], DESM [12], LHM [13], GP [27], DCMC [37], LBE [28], SE [16], CDCP [18], CDB [24], and DTM [38]. For a fair comparison, we employ the saliency maps provided by [51].…”
Section: Experiments and Discussion
confidence: 99%
“…Furthermore, existing RGB-D saliency detection models mainly use depth information in two ways. One is based on depth features [9–25], which takes depth information as an explicit supplementary feature alongside color features. In [12], Cheng et al. calculate the saliency map with additional depth information through color contrast, depth contrast, and a spatial bias extended from 2D to 3D, which also shows that depth information is beneficial to visual saliency analysis in complex scenes.…”
Section: Introduction
confidence: 99%
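The excerpt above describes combining color contrast, depth contrast, and a spatial bias into a saliency score. A minimal, hypothetical sketch of that idea is given below; it uses crude global contrast (distance from the image mean) rather than the region-level contrast of the cited method, so it illustrates only the combination scheme, not the actual algorithm of Cheng et al.

```python
import numpy as np

def contrast_saliency(color, depth, sigma=0.4):
    """Toy global-contrast saliency combining color, depth, and a center bias.

    color: (H, W, 3) array with values in [0, 1]
    depth: (H, W) array with values in [0, 1]
    sigma: spread of the Gaussian center-bias term (assumed parameter)
    """
    h, w = depth.shape
    c = color.reshape(-1, 3)
    d = depth.reshape(-1)

    # Color contrast: distance of each pixel's color from the global mean color.
    color_con = np.linalg.norm(c - c.mean(axis=0), axis=1)
    # Depth contrast: distance of each pixel's depth from the global mean depth.
    depth_con = np.abs(d - d.mean())

    # Spatial bias: Gaussian falloff from the image center (normalized coords).
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys / h, xs / w], axis=-1).reshape(-1, 2)
    dist2 = ((coords - np.array([0.5, 0.5])) ** 2).sum(axis=1)
    bias = np.exp(-dist2 / (2 * sigma ** 2))

    # Combine the cues and normalize the map to [0, 1].
    sal = (color_con + depth_con) * bias
    return (sal / sal.max()).reshape(h, w)
```

The additive fusion and Gaussian center bias here are illustrative choices; published models weight and fuse these cues in more elaborate ways.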
“…To quantify the performance of different models, we conducted a comprehensive evaluation of 24 representative RGB-D based salient object detection models, including nine traditional methods: LHM [51], ACSD [56], DESM [49], GP [50], LBE [57], DCMC [36], SE [37], CDCP [84], CDB [95], and fifteen deep learning-based methods: DF [52], PCF [92], CTMF [58], CPFP [53], TANet [103], AFNet [106], MMCI [55], DMRA [54], D3Net [38], SSF [39], A2dele [40], S2MA [41], ICNet [42], JL-DCF [43], and UC-Net [44]. We report the mean values of Sα and MAE across the five datasets (STERE [139], NLPR [51], LFSD [140], DES [49], and SIP [38]) for each model in Fig.…”
Section: Overall Evaluation
confidence: 99%
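The evaluation above reports MAE, the standard mean-absolute-error metric for salient object detection: the mean per-pixel absolute difference between a predicted saliency map and the binary ground-truth mask, with both normalized to [0, 1]. A small sketch of that computation (function name and the 255-scaling convention are assumptions, not taken from the cited benchmark code):

```python
import numpy as np

def mae(saliency_map, ground_truth):
    """Mean absolute error between a saliency map and a ground-truth mask.

    Both inputs are normalized to [0, 1] before comparison; maps stored as
    8-bit images (values up to 255) are rescaled, an assumed convention.
    """
    s = saliency_map.astype(np.float64)
    g = ground_truth.astype(np.float64)
    if s.max() > 1.0:
        s /= 255.0
    if g.max() > 1.0:
        g /= 255.0
    # Average the per-pixel absolute differences over the whole map.
    return np.abs(s - g).mean()
```

Lower MAE is better; the companion Sα (structure-measure) metric instead scores structural similarity and is higher-is-better, which is why benchmarks typically report the pair together.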
“…Traditional RGB-D saliency models usually rely on hand-crafted features to distinguish salient objects in given images. Existing widely-used hand-crafted features include contrast [28,38,39], compactness [39,40], center-surround difference [41,42], center or boundary prior [43,44], background enclosure [32], and various fused saliency measures [29]. In Ref.…”
Section: Traditional
confidence: 99%