2013
DOI: 10.1167/13.9.144
Do low-level visual features have a causal influence on gaze during dynamic scene viewing?

Cited by 12 publications (4 citation statements). References 0 publications.
“…1 shows the architecture of the proposed deep video saliency model. Inspired by classical human visual perception research [14], [15], which suggests that both static and dynamic saliency cues contribute to video saliency, we design our model with two modules, simultaneously considering both the spatial and temporal characteristics of the scene.…”
Section: A. Architecture Overview (mentioning; confidence: 99%)
“…Another challenge for detecting saliency in dynamic scenarios derives from the natural demands of this task. As suggested by human visual perception research [14], [15], when computing dynamic saliency maps, video saliency models need to consider both the spatial and the temporal characteristics of the scene. We propose a deep video saliency model for producing spatiotemporal saliency by fully exploring both the static and dynamic saliency information.…”
Section: Introduction (mentioning; confidence: 99%)
“…Human visual perception research [51], [52] suggests that basic visual features such as motion and edges are processed at the human pre-attentive stage of visual attention, which motivates us to combine spatial edge and motion boundary cues into a coalescent spatiotemporal edge map. Both color and motion discontinuities provide valuable evidence for predicting object boundaries.…”
Section: Spatiotemporal Edge Generation (mentioning; confidence: 99%)
“…Even before the era of deep learning, object motion was considered an informative cue for automatic video object segmentation. This is largely inspired by the remarkable capability of motion perception in the human visual system (HVS) (Treisman and Gelade 1980; Mital et al. 2013), which can quickly orient attention towards moving objects in dynamic scenarios. In fact, human beings are more sensitive to moving objects than to static ones, even if the static objects are strongly contrasted against their surroundings.…”
Section: Introduction (mentioning; confidence: 99%)