2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00888
SID4VAM: A Benchmark Dataset With Synthetic Images for Visual Attention Modeling

Abstract: A benchmark of saliency model performance with a synthetic image dataset is provided. Model performance is evaluated through saliency metrics as well as the influence of model inspiration and consistency with human psychophysics. SID4VAM is composed of 230 synthetic images with known salient regions. Images were generated with 15 distinct types of low-level features (e.g. orientation, brightness, color, size...) with a target-distractor pop-out type of synthetic pattern. We have used Free-Viewing and Visual …
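The abstract describes synthetic pop-out stimuli in which the salient region is known by construction: a grid of identical distractors plus one target that differs in a single low-level feature. A minimal NumPy sketch of one such stimulus (brightness pop-out); the grid size, cell size, and intensity values here are illustrative assumptions, not the actual SID4VAM generation code:

```python
import numpy as np

def brightness_popout(grid=5, cell=16, distractor=0.4, target=0.9, seed=0):
    """Sketch of a pop-out stimulus: a grid of distractor squares with
    one brighter target square, so the salient region is known a priori.
    All parameter values are hypothetical."""
    rng = np.random.default_rng(seed)
    img = np.zeros((grid * cell, grid * cell))
    tr, tc = rng.integers(grid, size=2)  # randomly chosen target cell
    for r in range(grid):
        for c in range(grid):
            val = target if (r, c) == (tr, tc) else distractor
            # draw a centered square occupying half of each cell
            y0, x0 = r * cell + cell // 4, c * cell + cell // 4
            img[y0:y0 + cell // 2, x0:x0 + cell // 2] = val
    return img, (tr, tc)

img, target_cell = brightness_popout()
print(img.shape, target_cell)
```

Swapping the intensity difference for a rotation, hue shift, or size change would give the other feature types listed in the abstract; the ground-truth salient region is the target cell either way.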

Cited by 9 publications (13 citation statements); references 54 publications.
“…On behalf of the model's biological plausibility regarding V1 function and its computations, we present a unified model of lateral connections in V1, able to predict attention (both in free-viewing and visual search) from real and synthetic color images while mimicking the physiological properties of the neural circuitry stated previously. The HVS perceives light at distinct wavelengths of the visual spectrum and separates them into distinct channels for further processing in the cortex. First, retinal photoreceptors (or RP, corresponding to rod and cone cells) are photosensitive to luminance (rhodopsin-pigmented) and color (photopsin-pigmented) [41,42].…”
Section: Introduction
confidence: 99%
“…'s work [7], the IoR can be applied to static saliency models by subtracting the accumulated inhibitory map from the saliency map during each gaze (Ŝ = S − Σₜ Iₜ). … and 230 psychophysical images (SID4VAM [17,75]). Generically, experimentation for these types of datasets [76] captures fixations from about 5 to 55 subjects, looking at a monitor inside a luminance-controlled room while being restrained with a chin rest, located at a relative distance of 30-40 pixels per degree of visual angle.…”
confidence: 99%
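The quoted statement describes turning a static saliency map into a scanpath by subtracting an accumulated inhibitory map at each gaze (inhibition of return). A minimal NumPy sketch of that idea; the Gaussian-blob inhibition, its width, and its decay weight are assumptions for illustration, not the cited work's implementation:

```python
import numpy as np

def ior_scanpath(saliency, n_gazes=5, sigma=2.0, decay=1.0):
    """Sketch of inhibition of return (IoR) on a static saliency map:
    at each gaze, fixate the current maximum of (saliency - inhibition),
    then add a Gaussian inhibitory blob there so later gazes move on.
    Blob shape and weights are hypothetical parameters."""
    sal = saliency.astype(float)
    h, w = sal.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inhibition = np.zeros_like(sal)
    fixations = []
    for _ in range(n_gazes):
        adjusted = sal - inhibition  # Ŝ = S − accumulated inhibitory map
        y, x = np.unravel_index(np.argmax(adjusted), adjusted.shape)
        fixations.append((int(y), int(x)))
        blob = np.exp(-((ys - y) ** 2 + (xs - x) ** 2) / (2 * sigma ** 2))
        inhibition += decay * blob   # accumulate inhibition over gazes
    return fixations

rng = np.random.default_rng(0)
smap = rng.random((32, 32))
print(ior_scanpath(smap, n_gazes=3))
```

Because each fixated peak is suppressed by the blob, successive fixations visit distinct locations, which is the behavior the IoR mechanism is meant to produce.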