2005
DOI: 10.1016/j.cviu.2004.10.009
Assessing the contribution of color in visual attention

Abstract:



Cited by 121 publications (68 citation statements)
References 21 publications (15 reference statements)
“…Recently the models have been used to boost some computer vision and pattern recognition techniques such as object detection [113], [124], [125], object recognition [126]- [131], action recognition [132], [133], segmentation [37], [114], [115], [134], [135] and background subtraction [136]. Besides, specific applications include video summarization [137] and compression [138], scene understanding [139]- [141], computer-human interaction [98], [142]- [147], robotics [132], [148]- [150], and driver assistance [151], [152]. The potential of the visual attention models that are capable of extracting important regions promises their contributions to many other domains.…”
Section: Discussion (mentioning)
confidence: 99%
“…Then, a comparison of these two models with the whole set of human fixation patterns was performed in order to obtain the respective scores. Note that the score s was computed taking the first 5 fixations of each subject into account, since it has been suggested that, with regard to human observers, initial fixations are controlled mainly in a bottom-up manner [10]. Figure 4 shows the scores for the different individual images.…”
Section: Performance in Presence of 2D Images (mentioning)
confidence: 99%
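The scoring procedure in the excerpt above, restricted to each subject's first five fixations, can be sketched as follows. The excerpt does not give the exact score formula, so this is a minimal illustration using the mean normalized saliency at fixated locations (an NSS-style measure); the name `fixation_score` is hypothetical:

```python
import numpy as np

def fixation_score(saliency_map, fixations_per_subject, n_first=5):
    """Score a saliency map against human fixation patterns.

    A sketch under stated assumptions: the map is normalized to zero
    mean and unit variance, then sampled at each subject's first
    `n_first` fixations (the excerpt keeps only the first 5, since
    initial fixations are taken to be bottom-up driven).
    """
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    values = []
    for fixations in fixations_per_subject:
        for (x, y) in fixations[:n_first]:  # first fixations only
            values.append(s[y, x])
    return float(np.mean(values))
```

A map whose salient peaks coincide with fixated locations yields a higher score than one whose peaks fall elsewhere.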
“…The eye tracking data was parsed for fixations and saccades in real time, using parsing parameters proven to be useful for cognitive research thanks to the reduction of detected microsaccades and short fixations (< 100 ms). Remaining saccades with amplitudes less than 20 pixels (0.75° of visual angle) as well as fixations shorter than 120 ms were discarded afterwards [10].
Section: Eye Movement and Fixation Pattern Recording (mentioning)
confidence: 99%
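The post-hoc filtering step described above can be sketched as below; the event representation (fixations as `(x, y, duration_ms)` tuples, saccades as pixel amplitudes) and the function name are assumptions for illustration:

```python
def filter_eye_events(fixations, saccades,
                      min_fixation_ms=120, min_saccade_px=20):
    """Discard fixations shorter than 120 ms and saccades with
    amplitudes under 20 pixels (about 0.75 degrees of visual angle
    in the reported setup), as the excerpt describes.

    Assumed formats: fixations are (x, y, duration_ms) tuples,
    saccades are amplitudes in pixels.
    """
    kept_fixations = [f for f in fixations if f[2] >= min_fixation_ms]
    kept_saccades = [a for a in saccades if a >= min_saccade_px]
    return kept_fixations, kept_saccades
```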
See 1 more Smart Citation
“…As with the Human Visual System, the fixation frequency map was next filtered by a spatial Gaussian filter. These frequency maps were filtered by a spatial Gaussian filter of σ = 37, which was chosen to approximate the size of the viewing field corresponding to the fovea in the gaze map [48]. The size of the Gaussian window was 40 × 40 pixels.…”
Section: Comparisons Between Gaze Maps and Saliency Maps (mentioning)
confidence: 99%
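The gaze-map construction in the excerpt above can be sketched as follows. `gaze_density_map` is a hypothetical name, and stamping a border-clipped 40 × 40 Gaussian window at each fixation is an assumption about how the fixed window size was applied:

```python
import numpy as np

def gaze_density_map(fixations, shape, sigma=37.0, win=40):
    """Accumulate fixations into a frequency map smoothed by a
    Gaussian of standard deviation sigma (37 px, approximating the
    foveal viewing field), applied via a win x win (40 x 40) window.

    A minimal sketch: each fixation adds a Gaussian window onto the
    map, clipped at the image borders.
    """
    h, w = shape
    ax = np.arange(win) - (win - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    gaze = np.zeros(shape, dtype=float)
    for (x, y) in fixations:
        x0, y0 = x - win // 2, y - win // 2
        # Clip the stamped window to the image bounds.
        xs = slice(max(x0, 0), min(x0 + win, w))
        ys = slice(max(y0, 0), min(y0 + win, h))
        gaze[ys, xs] += kernel[ys.start - y0:ys.stop - y0,
                               xs.start - x0:xs.stop - x0]
    return gaze
```

The resulting smoothed map can then be compared directly against a model's saliency map, as the cited comparison does.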