2007
DOI: 10.1145/1265957.1265960

Modeling embodied visual behaviors

Abstract: To make progress in understanding human visuomotor behavior, we will need to understand its basic components at an abstract level. One way to achieve such an understanding would be to create a model of a human that is sufficiently complex to be capable of generating such behaviors. Recent technological advances allow progress in this direction. Graphics models that simulate extensive human capabilities can be used as platforms from which to develop synthetic mo…

Cited by 92 publications (142 citation statements); references 39 publications.
“…Thus, in hybrid exemplar models, production targets can be generated on the basis of phonemic categories alone, but will normally be influenced also by larger units, especially for experienced speakers. This idea accords fairly well with the principles of adaptive resonance theory (Grossberg, 2003), of task dynamics (Saltzman, 1995), where control of behaviour becomes more global as skill develops (for an embodied formulation, see Simko and Cummins, 2010), and indeed of perception-action robotics models (e.g., Sprague, Ballard and Robinson, 2007;Roy, 2005).…”
Section: Discussion (supporting)
confidence: 66%
“…Some of this work makes use of advances in dynamic camera placement in 3D scenes [395] to extend the idea of "virtual vision" [396] or "embodied vision" [397] to "human animats" [398,399].…”
Section: Vision (mentioning)
confidence: 99%
“…Sprague and Ballard [5] developed a reward-based perceptual coordination mechanism for a simulated human agent. The agent performs a set of behaviours concurrently (where each behaviour has a separate goal), by sharing the set of actions amongst them.…”
Section: B. Related Work (mentioning)
confidence: 99%
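The arbitration idea described in the last excerpt — several concurrent behaviours, each with its own goal, sharing one action set — can be sketched as a value-summation rule: each behaviour scores the shared actions, and the agent takes the action with the highest combined score. This is a minimal illustrative sketch, not the paper's implementation; the function name and toy numbers are hypothetical.

```python
import numpy as np

def arbitrate(q_tables, state_ids):
    """Pick the shared action maximising the summed per-behaviour values.

    q_tables:  one (n_states, n_actions) value table per behaviour
               (hypothetical toy tables, not learned values).
    state_ids: each behaviour's current state index.
    """
    combined = sum(q[s] for q, s in zip(q_tables, state_ids))
    return int(np.argmax(combined))

# Two toy behaviours scoring three shared actions.
q_walk = np.array([[0.2, 0.9, 0.1]])   # this behaviour prefers action 1
q_avoid = np.array([[0.8, 0.0, 0.3]])  # this behaviour prefers action 0
print(arbitrate([q_walk, q_avoid], [0, 0]))  # combined: 1.0, 0.9, 0.4 -> 0
```

The point of the sketch is that no single behaviour dictates the action: the chosen action (0 here) is the best compromise across all active goals.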