Visual Saliency Based Object Tracking (2010)
DOI: 10.1007/978-3-642-12304-7_19

Cited by 34 publications (19 citation statements). References: 13 publications.
“…While psychologists and physiologists are interested in human visual attention behavior and anatomical evidence to support attention theory, computer scientists are concentrating on building computational models of visual attention that implement visual saliency in computers or machines. Computational visual attention has many applications for computer vision tasks, such as robot localization (Shubina & Tsotsos, 2010; Siagian & Itti, 2009), object tracking (G. Zhang, Yuan, Zheng, Sheng, & Liu, 2010), image/video compression (Guo & Zhang, 2010; Itti, 2004), object detection (Frintrop, 2006; Liu et al., 2011), image thumbnailing (Le Meur, Le Callet, Barba, & Thoreau, 2006; Marchesotti, Cifarelli, & Csurka, 2009), and implementation of smart cameras (Casares, Velipasalar, & Pinto, 2010).…”
Section: Introduction (mentioning, confidence: 99%)
“…In [44], the authors developed a bottom-up saliency tracker that tracked any salient target in the scene, which was done in an entirely unsupervised manner. This is in contrast to the typical online tracking problem, in which a particular target is to be followed throughout a video.…”
Section: Related Deep Convnet and Saliency Trackers (mentioning, confidence: 99%)
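To make the contrast drawn above concrete, the sketch below illustrates the general idea of bottom-up saliency tracking (not the specific detector of [44] or of the cited paper): each frame is reduced to a purely image-driven saliency map, and the "tracker" simply reports the map's peak, with no user-initialized target or template. The spectral-residual saliency measure, the helper names, and the synthetic moving-square demo are assumptions chosen only to keep the example self-contained.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def spectral_residual_saliency(gray):
    """Bottom-up saliency map from the spectral residual of a grayscale
    frame: purely image-driven, with no model of any particular target."""
    spectrum = np.fft.fft2(gray)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=3)  # smooth so peaks form blobs

def track_most_salient(frames):
    """Unsupervised 'tracking': for every frame, return the location of the
    saliency peak instead of matching a user-initialized template."""
    peaks = []
    for gray in frames:
        sal = spectral_residual_saliency(np.asarray(gray, dtype=np.float64))
        peaks.append(np.unravel_index(np.argmax(sal), sal.shape))
    return peaks  # one (row, col) peak per frame

if __name__ == "__main__":
    # Synthetic demo: a bright square drifting over low-amplitude noise.
    rng = np.random.default_rng(0)
    frames = []
    for t in range(5):
        frame = rng.normal(0.0, 0.05, size=(128, 128))
        r, c = 20 + 10 * t, 30 + 8 * t
        frame[r:r + 12, c:c + 12] += 1.0
        frames.append(frame)
    print(track_most_salient(frames))
```

Because the saliency map is recomputed from scratch each frame, such a tracker follows whatever is most conspicuous at that moment, which is exactly why it needs no supervision but also cannot be told which particular object to follow.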
“…[7-9]), and they have been used to address problems such as human scene analysis,[10] video compression,[11] and object tracking.[12] Saliency maps from different labs differ in how they incorporate image and observer characteristics (e.g., color and intensity contrast,[13] visual motion,[14] and biological models[15]). Comparing the success with which various salience models predict eye movements is one source of information describing the image information that is used in observers' search strategies.…”
Section: Introduction (mentioning, confidence: 99%)
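The statement above points out that saliency models differ mainly in which image and observer characteristics they combine (color and intensity contrast, visual motion, biologically inspired channels). The sketch below shows that common structure in a minimal, hypothetical form: center-surround (difference-of-Gaussians) contrast on an intensity channel and two crude color-opponent channels, merged by a weighted sum. The channel definitions and weights are illustrative assumptions, not any particular published model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, sigma_center=2.0, sigma_surround=8.0):
    """Center-surround contrast as a difference of Gaussians: large where a
    feature value differs from its local neighborhood."""
    return np.abs(gaussian_filter(channel, sigma_center)
                  - gaussian_filter(channel, sigma_surround))

def saliency_map(rgb, weights=(1.0, 1.0, 1.0)):
    """Combine per-channel conspicuity maps (intensity, red-green,
    blue-yellow) into one saliency map by a weighted sum."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    red_green = r - g                    # crude color-opponent channels
    blue_yellow = b - (r + g) / 2.0
    maps = [center_surround(c) for c in (intensity, red_green, blue_yellow)]
    # Normalize each conspicuity map so no single channel dominates.
    maps = [m / (m.max() + 1e-8) for m in maps]
    combined = sum(w * m for w, m in zip(weights, maps))
    return combined / (combined.max() + 1e-8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.uniform(0.4, 0.6, size=(96, 96, 3))  # gray-ish background
    img[40:56, 40:56] = [0.9, 0.1, 0.1]            # a red patch pops out
    sal = saliency_map(img)
    print("peak saliency at", np.unravel_index(np.argmax(sal), sal.shape))
```

A motion channel (e.g., frame differencing) or a biologically motivated channel would slot into the same weighted combination; that choice of channels and weights is where published models mostly diverge.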