2016
DOI: 10.1167/16.9.9
Action adaptation during natural unfolding social scenes influences action recognition and inferences made about actor beliefs

Abstract: When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individ…

Cited by 3 publications (3 citation statements)

References 73 publications (125 reference statements)
“…VR was found to be acceptable and participants perceived avatars as real (Keefe et al, 2016; Ku et al, 2005; Oker et al, 2015). One study had a very large sample (N = 333) and is likely to have more robust findings in terms of indicating the acceptability of the VR (Keefe et al, 2016). Studies found that clinical samples had impairments related to eye gaze (Bekele et al, 2017; Caruana et al, 2019), impaired theory of mind (Canty et al, 2017), performed more slowly and made more errors during recognition and interaction tasks (Ventura et al, 2020), had higher intimacy for distant avatars (Park et al, 2014), and especially had difficulties in emotion recognition and related facial and social cues (Berrada-Baby et al, 2016; Dyck et al, 2010; Gutierrez-Maldonado et al, 2012; Kim, Jung, et al, 2009; Kim et al, 2005; Kim et al, 2007; Marcos-Pablos et al, 2016; Song et al, 2015; Thirioux et al, 2014).…”
Section: Immersive 2D Screen Studies
confidence: 97%
“…shopping, getting a bus, navigating the environment. VR was found to be acceptable and participants perceived avatars as real (Keefe et al, 2016; Ku et al, 2005; Oker et al, 2015). One study had a very large sample (N = 333) and is likely to have more robust findings in terms of indicating the acceptability of the VR (Keefe et al, 2016).…”
Section: Immersive 2D Screen Studies
confidence: 98%
“…There is some evidence that probing action recognition under more naturalistic conditions provides results that differ from those obtained with standard psychophysical setups. For example, Keefe, Wincenciak, Jellema, Ward, and Barraclough (2016) used life-size photo-realistic actors presented three-dimensionally, and their results indicate that complex judgments about the actors (the actor’s expectation about the weight of a box to be lifted) differ depending on whether participants view the stimuli on large-scale compared to small-scale screens. Moreover, action recognition performance in the visual periphery, as probed by life-size human stick figures (see example in Figure 3), is different from action recognition performance probed by small point-light humans.…”
Section: Introduction
confidence: 99%