2010
DOI: 10.3758/brm.42.1.168

Inferring intentions from biological motion: A stimulus set of point-light communicative interactions

Cited by 96 publications (116 citation statements: 1 supporting, 115 mentioning, 0 contrasting)
References 47 publications
“…This bias may be due to historical accident (walking was the first action investigated using point-light stimuli; Johansson, 1973), as well as to the mathematical tractability of the combined joint movements in spatial-temporal space (Cutting, 1978; Troje, 2002). However, humans are able to perceive a wide range of action categories other than walking when presented as point-light displays (e.g., boxing, dancing, jumping jacks; Brown et al., 2005; Dittrich, 1993; Dittrich et al., 1996; Giese & Lappe, 2002; Ma, Paterson, & Pollick, 2006; Norman, Payton, Long, & Hawkes, 2004; Thurman & Grossman, 2008) and communicative interactions (Manera, Schouten, Becchio, Bara, & Verfaillie, 2010; Neri, Luu, & Levi, 2006). Comparing human performance for these different actions may help us understand how the visual system conducts an intelligent spatial-temporal analysis for action recognition (Giese & Lappe, 2002; Thurman & Grossman, 2008).…”
Section: Introduction (mentioning)
confidence: 95%
“…grasping-for-drinking vs. grasping-for-cleaning, Iacoboni et al., 2005), even in conditions when the final part of the action is hidden from view (Umiltà et al., 2001). Similarly, behavioral experiments have shown that an agent's intention (e.g., to deceive) or affective state (e.g., happiness) can be reliably communicated to external observers in video or point-light depictions of actions, such as lifting a box (Grèzes, Frith, & Passingham, 2004; Runeson & Frykholm, 1981, 1983), basketball passing (Sebanz & Shiffrar, 2009), and in situations depicting various whole-body expressive gestures and movements, such as pointing (Manera, Schouten, Becchio, Bara, & Verfaillie, 2010), communicating (Clarke, Bradshaw, Field, Hampson, & Rose, 2005), walking (Chouchourelou, Matsuka, Harber, & Shiffrar, 2006; Roether, Omlor, Christensen, & Giese, 2009), or dancing (Dittrich, Troscianko, Lea, & Morgan, 1996).…”
Section: Introduction (mentioning)
confidence: 98%
“…For human actions, in perception research this subject matter is also often found in the context of "biological motion", with databases of motion capture data including [9], [10]. Another well-known database is the CMU Graphics Lab Motion Capture Database (available at mocap.cs.cmu.edu), which contains motion capture data of various actions in categories such as locomotions, pantomime, and expressions.…”
Section: Related Work and Motivation (mentioning)
confidence: 99%
“…Whereas it contains a lot of data on locomotion patterns, longer activities as well as human interactions are very much underrepresented and in addition only very loosely organized. As far as human-human interactions are concerned, a recent motion-capture database [10] contains 20 elementary interactions (such as point to the ceiling, I am angry, pick up, etc.) each performed by one male and one female couple.…”
Section: Related Work and Motivation (mentioning)
confidence: 99%