2017
DOI: 10.1016/bs.pbr.2017.07.001
Learning features in a complex and changing environment: A distribution-based framework for visual attention and vision in general

Cited by 23 publications (24 citation statements)
References 116 publications
“…In other words, we argue that the peripheral visual system constructed a representation of the distractor distribution in both experiments; however, this representation was less precise in Experiment 2, where more weight was given to the mean of the distribution in building the representation of the distribution, rather than its other features. Such an explanation is consistent with a distribution-based framework of visual attention (see Chetverikov et al [2017c] for a review).…”
Section: Ensemble Coding With Central Versus Peripheral Vision (supporting)
confidence: 80%
“…These explicit judgments on statistical parameters of a feature distribution might have limited power in revealing how accurately feature distributions in an ensemble are encoded by the visual system. Recently, Chetverikov, Campana, and Kristjánsson (2016 , 2017a , 2017b , 2017c , 2020 ) used a novel method to demonstrate that observers can encode the probability density function underlying the distractor distribution in an odd-one-out visual search task for orientation (2016, 2017a) and color (2017b). Instead of using explicit judgments of distribution statistics, they measured observers’ visual search times varying target similarity to previously learned distractors, which revealed observers’ expectations of distractor feature distributions.…”
Section: Introduction (mentioning)
confidence: 99%
“…We simulated the predictions from three models (Figure 2E-F; the simulation code is available at https://osf.io/rg2h8). For our main model of interest, the "bimodal" model, we assumed that the probabilities of different distractors can be represented by two Gaussian templates (for simplicity, we ignore the fact that the stimuli distributions might be more accurately represented by non-Gaussian templates (Chetverikov, Campana, & Kristjánsson, 2017a)) centered on the means of distractor distribution segments. We assumed that observers utilize the knowledge they obtained about distractors and targets optimally.…”
Section: Probabilistic Rejection Templates In Visual Working Memory (mentioning)
confidence: 99%
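The "bimodal" model quoted above represents distractor probabilities as a mixture of two Gaussian templates centered on the means of the two distractor-distribution segments. A minimal sketch of such a template follows; all numerical values (segment means of ±30°, a 10° template width, equal mixture weights) are hypothetical illustrations, not parameters from the cited simulation, whose actual code is at https://osf.io/rg2h8.

```python
import math

def gaussian(x, mu, sigma):
    """Normal probability density at x with mean mu and s.d. sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def bimodal_template(x, mu1, mu2, sigma, w=0.5):
    """Mixture of two Gaussian templates, one per distractor-distribution
    segment; w is the weight on the first segment (equal weights by default)."""
    return w * gaussian(x, mu1, sigma) + (1 - w) * gaussian(x, mu2, sigma)

# Hypothetical orientations in degrees: segments centered at -30 and +30.
# A probe at either segment mean receives high template density (likely a
# distractor, so reject it); a probe between the modes receives low density.
for probe in (-30.0, 0.0, 30.0):
    print(probe, bimodal_template(probe, -30.0, 30.0, 10.0))
```

On this sketch, an "optimal" observer as described in the quote would treat low-density feature values as likely targets, since they are improbable under the learned distractor template.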
“…Not only do we seem to be able to generate strong incidental representations, but the memories we have gathered on the fly, during natural interactions, might in fact be critical for proactively guiding our behavior. Chetverikov, Campana, and Kristjánsson (2017a) have shown how repeated searching within search arrays with particular feature distributions of orientation or color (Chetverikov et al, 2017c;Tanrikulu, Chetverikov, & Kristjánsson, 2020) enables observers to learn the probabilities of feature values and build up a probabilistic template of the set for distractor rejection (Chetverikov, Campana, & Kristjánsson, 2020a). Using a repeated-search task, Võ and Wolfe (2012) demonstrated that attentional guidance by memories from previous encounters was more effective if these memories were established when looking for an item (during search), compared to looking at targets (explicit memorization and free viewing).…”
Section: Building And Using Behaviorally Optimal Long-term Representa… (mentioning)
confidence: 99%