2011
DOI: 10.3758/s13428-011-0128-2

Using modified incremental chart parsing to ascribe intentions to animated geometric figures

Abstract: People spontaneously ascribe intentions on the basis of observed behavior, and research shows that they do this even with simple geometric figures moving in a plane. The latter fact suggests that 2-D animations isolate the critical information (object movement) that people use to infer the possible intentions (if any) underlying observed behavior. This article describes an approach to using motion information to model the ascription of intentions to simple figures. Incremental chart parsing is a technique developed …

Cited by 5 publications (4 citation statements) | References 47 publications
“…What does an object's behavior reveal about its mental state? These questions have historically received a great deal of attention in developmental psychology (e.g., Gelman, Durgin, & Kaufman, 1995; Gergely, Nádasdy, Csibra, & Bíró, 1995; Johnson, 2000; Kuhlmeier, Wynn, & Bloom, 2003; Onishi & Baillargeon, 2005; Williams, 2000) and philosophy (e.g., Goldman, 2006; Heal, 1996; Nichols & Stich, 1998; Stich & Nichols, 2003), and are receiving increasing attention in psychophysics with adult subjects (Barrett, Todd, Miller, & Blythe, 2005; Blythe, Todd, & Miller, 1999; Gao, McCarthy, & Scholl, 2010; McAleer & Pollick, 2008; Pratt, Radulescu, Guo, & Abrams, 2010; Pantelis & Feldman, 2012; Tremoulet & Feldman, 2000, 2006; Zacks, Kumar, Abrams, & Mehta, 2009) and in computational modeling (Baker, Saxe, & Tenenbaum, 2009; Burgos-Artizzu, Dollár, Lin, Anderson, & Perona, 2012; Crick & Scassellati, 2010; Feldman & Tremoulet, 2008; Kerr & Cohen, 2010; Pantelis et al., 2014; Pautler, Koenig, Quek, & Ortony, 2011; Thibadeau, 1986). Many of these past studies have relied on the direct parametric manipulation of the physical qualities of stimulus objects (e.g., their velocity or acceleration), and measurement of the resulting subjective percepts (such as perceived animacy).…”
mentioning
confidence: 99%
“…As in previous research [3,12], we aim to develop algorithms that can automatically interpret and narrate observed behavior, in much the same fashion as the subjects in Heider and Simmel's original study. This authoring tool allows us to collect hundreds or thousands of movie and narration pairs from volunteers, which can be used both to evaluate the performance of our algorithms and to model the relationships between observed action, intentions, and the language used to narrate interpretations.…”
Section: Discussion
mentioning
confidence: 99%
“…Thibadeau (1986) takes a symbolic approach, representing the coordinates of each object in each frame of the original film, which are matched to defined action schemas, such as opening the door or going outside the box. Pautler et al. (2011) follow a related approach, beginning with object-trajectory information from an animated recreation of the Heider-Simmel film. An incremental chart parsing algorithm with a hand-authored action grammar is then applied to recognize character actions as well as their intentions.…”
Section: Triangle-COPA
mentioning
confidence: 99%
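The technique named in the excerpt above — incremental chart parsing over observed actions with a hand-authored action grammar — can be illustrated with a minimal sketch. The grammar rules, action names (`approach`, `contact`), and the `Edge` representation here are illustrative assumptions for exposition, not the authors' actual implementation, which handles far richer grammars and ambiguity.

```python
# Toy incremental bottom-up chart parser over a stream of observed actions.
# Each arriving observation becomes an edge; completed grammar rules add
# higher-level edges (derived actions, then ascribed intentions).
from dataclasses import dataclass

# Hypothetical hand-authored "action grammar": a sequence of lower-level
# actions rewrites to a higher-level action or intention label.
GRAMMAR = {
    ("approach", "approach"): "chase",    # repeated approach -> chase
    ("chase", "contact"): "attack",       # chase ending in contact
    ("attack",): "intent:harm",           # ascribed intention
}

@dataclass
class Edge:
    label: str   # category spanning this edge
    start: int   # index of first observation covered
    end: int     # index one past the last observation covered

def extend_chart(chart, new_edge):
    """Add new_edge and, bottom-up, any higher-level edges it completes."""
    agenda = [new_edge]
    while agenda:
        edge = agenda.pop()
        if edge in chart:
            continue
        chart.append(edge)
        for rhs, lhs in GRAMMAR.items():
            if rhs[-1] != edge.label:
                continue
            # Walk left through the chart matching earlier RHS symbols
            # whose spans abut the edge we just added.
            spans = [edge]
            ok = True
            for sym in reversed(rhs[:-1]):
                prev = [e for e in chart
                        if e.label == sym and e.end == spans[-1].start]
                if not prev:
                    ok = False
                    break
                spans.append(prev[0])
            if ok:
                agenda.append(Edge(lhs, spans[-1].start, edge.end))

def parse_stream(observations):
    """Feed observed actions one at a time; return all derived labels."""
    chart = []
    for i, obs in enumerate(observations):
        extend_chart(chart, Edge(obs, i, i + 1))
    return {e.label for e in chart}

labels = parse_stream(["approach", "approach", "contact"])
```

Because the chart is extended one observation at a time, interpretations such as `chase` (and ultimately `intent:harm`) become available mid-stream, mirroring how viewers revise intention ascriptions as an animation unfolds.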