2016
DOI: 10.7717/peerj.2796

Age-congruency and contact effects in body expression recognition from point-light displays (PLD)

Abstract: Recognition of older people’s body expressions is a crucial social skill. Here we investigate how age, not just of the observer but also of the observed individual, affects this skill. Age may influence the ability to recognize other people’s body expressions through changes in one’s own ability to perform certain actions over the lifespan (i.e., an own-age bias may occur, with best recognition for one’s own age). Whole-body point-light displays of children, young adults and older adults (>70 years) expressing six…

Cited by 6 publications (7 citation statements)
References 81 publications

“…If humans look at dogs as if they were humans, they probably miss important cues. Age-related changes in the perception of emotional cues have been widely reported for both facial (Sullivan et al 2017) and bodily expressions (Pollux et al 2016). While several studies have shown a general decline in emotion recognition (Kret and de Gelder 2012; Sullivan et al 2017), in our study sex or age of viewers had little impact on ECA.…”
Section: Discussion (contrasting)
confidence: 59%
“…(5) Do dogs and humans visually inspect emotionally expressive individuals (dog and human) in the same way? Although there is no study of human gaze allocation at the full body of dogs, based on previous research focusing on facial expressions, we predicted that human gaze would be affected by both the viewed species and emotional expressions (Guo et al 2019; Correia-Caeiro et al 2020) and that both age and sex would modulate gaze patterns, at least at human bodies (Pollux et al 2016). We also predicted that dog gaze would be affected by the viewed expressions and species, since dogs are able to discriminate and recognise (at least some) prototypical facial expressions (Barber et al 2016; Correia-Caeiro et al 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…Alaerts et al (2011) found that angry PLDs were the easiest to recognize. However, the superiority of identifying happy PLDs in the present study is consistent with most previous sets, regardless of the number of forced-choice options (Atkinson et al, 2004; Halovic & Kroos, 2018a, 2018b; Lee & Kim, 2017; Pollux et al, 2016; Walk & Homan, 1984). Happiness had the highest subjective and objective movements and emotional intensity.…”
Section: Discussion (supporting)
confidence: 91%
“…The present validation study showed that recognition accuracy for each emotion and each view in the DEMOS was relatively high and significantly higher than the chance level (0.167). At first glance, mean recognition accuracies in the DEMOS were between 0.414 and 0.776, which seemed lower than those for PLDs in Atkinson et al (2004) (0.6306-0.8417), Walk and Homan (1984) (71-96%), and Ross et al (2012) (overall 81.1% in adults), but comparable to or higher than Pollux et al (2016) (40-90% in young adults), Alaerts et al (2011) (44.2-58.6%), and Halovic and Kroos (2018b) (9-26%). It should also be noted that the number of options provided for participants often depends on the kind of emotions in their studies, resulting in different chance levels.…”
Section: Discussion (mentioning)
confidence: 86%
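
A minimal sketch of the chance-level arithmetic referred to above: in a k-alternative forced-choice task the chance level is 1/k, so six emotion labels give 0.167, while fewer options raise the baseline. The Python snippet below illustrates this and tests an accuracy against chance with an exact binomial test; the trial counts are hypothetical placeholders, not values from any of the cited studies.

from scipy.stats import binomtest

def chance_level(n_options: int) -> float:
    # Chance accuracy in an n-alternative forced-choice task is 1/n.
    return 1.0 / n_options

print(round(chance_level(6), 3))  # 0.167 -> six emotion labels, as in the quoted validation study
print(round(chance_level(4), 3))  # 0.25  -> a four-option design has a higher baseline

# Hypothetical example: 50 correct responses out of 120 six-alternative trials.
result = binomtest(k=50, n=120, p=chance_level(6), alternative="greater")
print(result.pvalue)  # a small p-value indicates accuracy significantly above chance
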
“…To verify whether motor resonance or automatic imitation effects based on gender arise, we manipulated the gender of the human characters, while the gender of the humanoid robot was not manipulated. To facilitate age resonance for pre-adolescents, we selected pictures of a boy or a girl (Liuzza, Setti & Borghi, 2011; Pollux, Hermens & Willmott, 2016). Overall, we had six possible Protagonist-Other combinations: Girl-Boy; Boy-Girl; Girl-Robot; Robot-Girl; Boy-Robot; Robot-Boy.…”
Section: Visual Stimuli (mentioning)
confidence: 99%