2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6853664

Using monocular depth cues for modeling stereoscopic 3D saliency

Abstract: Saliency is one of the most important features in human visual perception. It is widely used nowadays for perceptually optimizing image processing algorithms. Several models have been proposed for 2D images, and only a few attempts can be observed for 3D ones. In this paper, we propose a stereoscopic 3D saliency model relying on 2D saliency features jointly with depth obtained from monocular cues. On the one hand, the use of 2D saliency features is justified psychophysically by the similarit…
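The abstract describes fusing conventional 2D saliency features with depth estimated from monocular cues of a single view. The sketch below only illustrates that general kind of fusion; it is not the authors' model, and the inputs, the normalization, and the weight `alpha` are assumptions introduced here for illustration.

```python
import numpy as np

def normalize(feature_map):
    """Rescale a map to [0, 1] so maps with different dynamic ranges can be fused."""
    fmin, fmax = float(feature_map.min()), float(feature_map.max())
    if fmax - fmin < 1e-8:
        return np.zeros_like(feature_map, dtype=float)
    return (feature_map - fmin) / (fmax - fmin)

def fuse_2d_saliency_with_monocular_depth(saliency_2d, monocular_depth, alpha=0.5):
    """Toy fusion of a 2D saliency map with a depth map estimated from monocular cues.

    saliency_2d     : 2D array produced by any existing 2D visual attention model.
    monocular_depth : 2D array where larger values mean closer to the viewer.
    alpha           : hypothetical weight balancing plain 2D saliency against
                      depth-modulated saliency (not a parameter from the paper).
    """
    s = normalize(saliency_2d)
    d = normalize(monocular_depth)
    # Closer regions boost the 2D saliency; the blend keeps part of the original map.
    return normalize(alpha * s + (1.0 - alpha) * s * d)
```

Treating closer regions as more attention-grabbing is a common heuristic in 3D saliency work; the paper's actual feature extraction and pooling are not reproduced here.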

Cited by 3 publications (5 citation statements). References 19 publications.

“…Lang et al 52 extracted the depth information directly using a Kinect sensor and developed a probabilistic framework to measure the saliency probability at every depth level and combine the resulting depth saliency with existing 2D VAMs. Iatsun et al 76 … Compared to 3D VAMs for stereoscopic images, saliency prediction for stereoscopic video has attracted significantly less attention, therefore leaving room for improvement. One reason might be the lack of publicly available benchmark stereoscopic video datasets.…”
Section: Overview of the State-of-the-Art 3D VAMs (mentioning)
confidence: 99%
“…Lang et al 52 extracted the depth information directly using a Kinect sensor and developed a probabilistic framework to measure the saliency probability at every depth level and combine the resulting depth saliency with existing 2D VAMs. Iatsun et al 76 derived the depth information from only monocular depth cues (i.e., only from one view of an image) and incorporated the resulting artificial depth in conjunction with an existing 2D VAM. Wang et al 51 generated a depth saliency map using a Bayesian approach to be combined with existing 2D VAMs.…”
Section: Overview of the State-of-the-Art 3D VAMs (mentioning)
confidence: 99%
“…Most existing approaches for 3D saliency detection either treat the depth feature as an indicator to weight the RGB saliency map [15][16][17][18] or consider the 3D saliency map as the fusion of the saliency maps of these low-level features [19][20][21][22]. It is not clear how to better integrate 2D saliency features with a depth-induced saliency feature, and linearly combining the saliency maps produced by these features cannot guarantee better results.…”
Section: Related Work (mentioning)
confidence: 99%
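The excerpt above contrasts two common fusion schemes in earlier 3D saliency work: weighting an RGB saliency map by depth, and linearly combining per-feature saliency maps. The following is a minimal sketch of both schemes under assumed inputs; the function names and the uniform default weights are hypothetical and do not come from any of the cited models.

```python
import numpy as np

def depth_weighted_saliency(rgb_saliency, depth):
    """Scheme 1: use a normalized depth map as a per-pixel weight on the RGB saliency map."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return rgb_saliency * d

def linear_fusion(feature_saliency_maps, weights=None):
    """Scheme 2: linearly combine saliency maps computed from several low-level features."""
    if weights is None:
        # Assumed uniform weights when none are given.
        weights = [1.0 / len(feature_saliency_maps)] * len(feature_saliency_maps)
    fused = np.zeros_like(feature_saliency_maps[0], dtype=float)
    for w, fmap in zip(weights, feature_saliency_maps):
        fused += w * fmap
    return fused
```

As the excerpt points out, neither scheme guarantees an improvement over its inputs; choosing the weights, or replacing the linear rule entirely, is exactly the open issue raised there.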
“…Iatsun et al proposed a 3D saliency model by relying on 2D saliency features jointly with depth obtained from monocular cues, in which 3D perception is significantly based on monocular cues [18]. The models in this category combine 2D features with a depth feature to calculate the final saliency map, but they do not include the depth saliency map in their computation processes.…”
Section: Related Work (mentioning)
confidence: 99%