2015 49th Annual Conference on Information Sciences and Systems (CISS)
DOI: 10.1109/ciss.2015.7086902
How is motion integrated into a proto-object based visual saliency model?

Cited by 11 publications (22 citation statements)
References 14 publications
“…A feature map F is computed for each video frame based on bottom-up features. Here, we choose a low-level saliency map for F, based on motion, intensity, color and orientation [3], multiplied with a Gaussian anisotropic center bias (as suggested in [4]). The visual sensitivity map S depends on the current gaze position (x₀, y₀) and determines the extent to which features can be perceived by the observer.…”
Section: Eye Movements As a Sequential Decision Making Task
confidence: 99%
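The excerpt above describes the feature map F as a low-level saliency map (motion, intensity, color, orientation) multiplied by a Gaussian anisotropic center bias. The following is a minimal Python sketch of that combination only; the function names, the Gaussian widths, and the random stand-in saliency map are illustrative assumptions, not taken from the cited papers.

```python
# Hypothetical sketch: weight a bottom-up saliency map by an anisotropic
# Gaussian center bias to obtain a feature map F. Names and parameter
# choices (sigma_x, sigma_y) are assumptions for illustration.
import numpy as np

def center_bias(height, width, sigma_x, sigma_y):
    """Anisotropic Gaussian centered on the frame (assumed functional form)."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    return np.exp(-(((xs - cx) ** 2) / (2.0 * sigma_x ** 2)
                    + ((ys - cy) ** 2) / (2.0 * sigma_y ** 2)))

def feature_map(saliency, sigma_x=None, sigma_y=None):
    """Feature map F: bottom-up saliency multiplied by the center bias."""
    h, w = saliency.shape
    sigma_x = sigma_x if sigma_x is not None else w / 4.0  # assumed width
    sigma_y = sigma_y if sigma_y is not None else h / 6.0  # narrower vertically (anisotropic)
    F = saliency * center_bias(h, w, sigma_x, sigma_y)
    return F / F.max() if F.max() > 0 else F

if __name__ == "__main__":
    # Random array standing in for the real motion/intensity/color/orientation
    # saliency of a video frame.
    low_level_saliency = np.random.rand(120, 160)
    F = feature_map(low_level_saliency)
    print(F.shape, float(F.min()), float(F.max()))
```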
“…We do not implement temporal dynamics of neuronal activity because all our stimuli are static images. Other saliency models have been proposed that do include image motion, for instance, Molin et al. (2015).…”
Section: Model
confidence: 99%
“…A proto-object based saliency model was shown to predict eye fixations with good accuracy (Russell et al., 2014). While originally this model used information from maps of intensity, color, and orientation, it was later extended to additionally utilize motion, depth, and texture features (Hu et al., 2016; Mancinelli et al., 2018; Molin et al., 2015; Uejima et al., 2020), and also implemented in biofidelic neuromorphic hardware (Iacono et al., 2019).…”
Section: Introduction
confidence: 99%