1990
DOI: 10.1002/vis.4340010106

A vision‐based approach to behavioural animation

Abstract: This paper presents an innovative way of animating actors at a high level based on the concept of synthetic vision. The objective is simple: to create an animation in which a synthetic actor automatically moves along a corridor, avoiding objects and other synthetic actors. To simulate this behaviour, each synthetic actor uses synthetic vision as its perception of the world and thus as the unique input to its behavioural model. This model is based on the concept of displacement local automata (DLA), which is sim…
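The abstract's core idea, that perception is the sole input to the behaviour model, can be illustrated with a toy loop. This is a minimal sketch under stated assumptions: the grid corridor, the one-cell `sense` view, and the turn-on-obstacle rule are all illustrative stand-ins for the paper's displacement local automata, not the authors' actual method.

```python
# Toy vision-driven behavioural step: the actor's ONLY world input is
# what sense() returns. World layout and rule are illustrative assumptions.
CORRIDOR = [
    "##########",
    "#....#...#",
    "#........#",
    "##########",
]

def sense(pos, heading):
    """Return what the actor 'sees' one cell ahead (its sole perception)."""
    x, y = pos[0] + heading[0], pos[1] + heading[1]
    return CORRIDOR[y][x]

def step(pos, heading):
    """Advance if the view is clear; otherwise rotate 90 degrees.
    A toy rule standing in for a displacement local automaton."""
    if sense(pos, heading) == ".":
        return (pos[0] + heading[0], pos[1] + heading[1]), heading
    return pos, (heading[1], -heading[0])

pos, heading = (1, 2), (1, 0)  # start inside the corridor, facing +x
for _ in range(6):
    pos, heading = step(pos, heading)
```

After six steps the actor has walked down the open row without ever consulting the map directly, only its one-cell view.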

Cited by 124 publications (47 citation statements) | References 12 publications
“…Other motion control schemes were developed as finite state machines [174], which allowed animators to develop procedural rules for characters [175]. Also, control schemes with origins outside computing have been woven into it (see Pelechano et al [64] for an excellent overview): physics models are popularly used, as are psychology-like approaches [176,177], cognitive schemes [170,178,179], group traits derived from collective human and animal behavior [180][181][182], machine-learning models that build control functions from trajectory data of real people [183][184][185], and schemes that afford computational efficiency in control across complex solution spaces [186,187]. Of particular relevance is the tradition of using the built environment (urban morphology, road networks, naturalistic paths, and implied movement effort) to impose hierarchies or abstractions that might ease look-up schemes in model databases, balance rendering loads in animation, and scale crowds to large populations [188][189][190][191][192][193][194][195].…”
Section: Animation
confidence: 99%
“…This can also be efficient, as the algorithms for ray-casting are well-known and well-optimized, in particular on graphics processing hardware [383,384]. The number, direction, and length of the rays can also be focused using scene culling [385], masking [179], proactive vision that estimates likely trajectories [178,181], or equivalents [386].…”
Section: Vision
confidence: 99%
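The ray-casting technique quoted above can be sketched concretely: an agent casts a fan of rays and records the distance to the nearest obstacle along each one. The circle obstacles, the five-ray fan, and the 60-degree field of view below are illustrative assumptions; real systems run this on graphics hardware, as the citation notes.

```python
import math

def ray_circle_distance(origin, direction, centre, radius):
    """Distance along a unit ray to a circle, or math.inf on a miss."""
    ox, oy = origin[0] - centre[0], origin[1] - centre[1]
    b = 2 * (ox * direction[0] + oy * direction[1])
    c = ox * ox + oy * oy - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return math.inf
    t = (-b - math.sqrt(disc)) / 2  # nearer intersection
    return t if t >= 0 else math.inf

def cast_fan(origin, facing, obstacles, n_rays=5, fov=math.pi / 3):
    """Return one nearest-hit distance per ray across the field of view."""
    distances = []
    for i in range(n_rays):
        angle = facing - fov / 2 + i * fov / (n_rays - 1)
        d = (math.cos(angle), math.sin(angle))
        distances.append(min(ray_circle_distance(origin, d, c, r)
                             for c, r in obstacles))
    return distances

obstacles = [((5.0, 0.0), 1.0)]  # one unit circle dead ahead
view = cast_fan((0.0, 0.0), 0.0, obstacles)
# Only the central ray hits; the off-axis rays miss the circle.
```

Focusing the number, direction, and length of rays, as the quoted masking and culling schemes do, amounts to shrinking `n_rays`, `fov`, or the set of `obstacles` tested per ray.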
“…These sensors constitute a starting point for implementing behaviour such as direct vision during a move, handling of objects, and responding to sounds or words. Our ALifeE integrates in an important way the main virtual sensors of an AVA, as in the following research: Virtual Vision proposed by [2][3][4], Virtual Audition proposed by [5], and Virtual Touch proposed by [6]. After acquiring the information, the basic perceptive part of the AVA is carried out by the Flexible Perception Pipeline approach proposed by.…”
Section: Virtual Sensors Background
confidence: 99%
“…In the former, the scene observed by the digital actors is rendered and information is extracted from the resulting image using image processing algorithms [20,17,9]. In this approach only a limited amount of information is accessible, and image processing is usually time consuming.…”
Section: Digital Actors Control
confidence: 99%
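The render-then-extract approach described in the last citation can be sketched in miniature: the scene is rendered to a small image whose pixels carry object identity and depth, and the controller reads obstacles from that image alone. The image layout and the id/depth pixel format are illustrative assumptions, not any cited system's actual representation.

```python
# Toy "rendered image": each pixel is (object_id, depth).
# 0 = empty, 1 = wall, 2 = another actor. Layout is an assumption.
rendered = [
    [(0, 9.0), (1, 3.0), (0, 9.0)],
    [(0, 9.0), (1, 3.0), (2, 2.0)],
]

def nearest_obstacles(image):
    """For each image column, return (object_id, depth) of the closest
    non-empty pixel, or None if the column is clear."""
    hits = []
    for col in range(len(image[0])):
        column = [row[col] for row in image if row[col][0] != 0]
        hits.append(min(column, key=lambda p: p[1]) if column else None)
    return hits

print(nearest_obstacles(rendered))  # [None, (1, 3.0), (2, 2.0)]
```

Scanning every pixel per query is what makes this style costly, the point the citation makes: the richer the rendered view, the more image processing stands between perception and action.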