2021 | Preprint
DOI: 10.31234/osf.io/fna9z
Look at what I can do: Object affordances guide visual attention while speakers describe potential actions

Abstract: As we act on the world around us, our eyes seek out objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. Our previous work showed that objects affording grasping interactions influenced attention when static scenes depicted reachable spaces, and that attention was otherwise better explained by general meaning (Rehrig, Peacock, et al., 2021). Because grasping …

Cited by 1 publication (3 citation statements)
References 32 publications
“…As in recent work (Nuthmann et al., 2017; Rehrig et al., 2022; van Renswoude, Visser, et al., 2019), we used generalized linear mixed‐effect models (GLMMs) with binomial link functions (i.e., logistic regression) to model how the likelihood of looking depended on the visual features of gazed and un‐gazed locations. Analyses were conducted in R (R Core Team, 2018) using the lme4 package (Bates et al., 2014).…”
Section: Results
confidence: 99%
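The analysis described in this citation statement is a logistic mixed-effects regression fit with lme4 in R. Below is a minimal sketch of that kind of model, assuming a hypothetical data frame named fixations with a binary outcome looked (1 for the gazed location, 0 for the un-gazed control), illustrative visual-feature predictors saliency, centering, and face, and grouping factors subject and scene; the column names and random-effects structure are assumptions for illustration, not taken from the cited work.

# Load lme4 for generalized linear mixed-effects models (GLMMs)
library(lme4)

# Hypothetical data frame: one row per candidate location
#   looked                    -- 1 if gazed, 0 if un-gazed control
#   saliency, centering, face -- visual features of that location
#   subject, scene            -- grouping factors for random intercepts
# fixations <- read.csv("fixations.csv")

# Binomial-link GLMM (logistic regression) predicting the likelihood of
# looking from visual features, with crossed random intercepts.
model <- glmer(
  looked ~ saliency + centering + face + (1 | subject) + (1 | scene),
  data   = fixations,
  family = binomial(link = "logit")
)

summary(model)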
“…Next, we randomly selected a second circular region of the same size from within the video frame (looking = 0) that was not permitted to overlap the gazed region, as in Rehrig et al. (2022). We then calculated saliency, centering, and face presence for the un‐gazed location.…”
Section: Results
confidence: 99%
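The control-region procedure in this citation statement (drawing a same-sized circular region that stays inside the frame and does not overlap the gazed region) can be sketched as simple rejection sampling. The function below is an illustration under assumed frame dimensions and a pixel-radius parameter; the name sample_ungazed and all default values are hypothetical, not from the cited work.

# Rejection sampling for an un-gazed control region: draw a circle centre
# inside the frame and accept it only if the circle does not overlap the
# gazed circle (centres at least two radii apart).
sample_ungazed <- function(gaze_x, gaze_y, radius,
                           frame_width = 1280, frame_height = 720) {
  repeat {
    # Candidate centre, constrained so the circle fits within the frame
    x <- runif(1, min = radius, max = frame_width  - radius)
    y <- runif(1, min = radius, max = frame_height - radius)
    # Accept when the candidate circle is disjoint from the gazed circle
    if (sqrt((x - gaze_x)^2 + (y - gaze_y)^2) >= 2 * radius) {
      return(c(x = x, y = y))
    }
  }
}

# Example: control region for a gaze centred at (400, 300) with a 50-pixel radius
sample_ungazed(gaze_x = 400, gaze_y = 300, radius = 50)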