Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology 2009
DOI: 10.1145/1643928.1643973
A saliency-based method of simulating visual attention in virtual scenes

Abstract: Complex interactions occur in virtual reality systems, requiring next-generation attention models to produce believable virtual human animations. This paper presents a saliency model that is neither domain- nor task-specific, which is used to animate the gaze of virtual characters. A critical question is addressed: what types of saliency attract attention in virtual environments, and how can they be weighted to drive an avatar's gaze? Saliency effects were measured as a function of their total fr…
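The weighting question raised in the abstract can be illustrated with a minimal sketch: each scene object carries per-feature saliency scores, and a weighted sum selects the gaze target. The feature names, data layout, and weights below are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of weighted-saliency gaze selection.
# Feature names ("motion", "proximity") and weights are invented
# for illustration; the paper derives its weights experimentally.

def select_gaze_target(objects, weights):
    """Return the object with the highest weighted saliency sum."""
    def score(obj):
        return sum(weights[f] * obj["saliency"].get(f, 0.0) for f in weights)
    return max(objects, key=score)

scene = [
    {"name": "door",   "saliency": {"motion": 0.1, "proximity": 0.8}},
    {"name": "avatar", "saliency": {"motion": 0.9, "proximity": 0.4}},
]
weights = {"motion": 0.6, "proximity": 0.4}

print(select_gaze_target(scene, weights)["name"])  # avatar (0.70 vs 0.38)
```

With these weights the moving avatar (0.6·0.9 + 0.4·0.4 = 0.70) outscores the nearby door (0.38), so the character's gaze would be driven toward it.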

Cited by 21 publications (18 citation statements)
References 32 publications
“…When a frequently searched game object is modified to share perceptual features such as color or orientation with a target item, the item will attract attention [Bernhard et al 2011]. Saliency models have been employed to animate the gaze behavior of virtual characters [Oyekoya et al 2009] and crowds [Grillon and Thalmann 2009].…”
Section: Related Work
confidence: 99%
“…In this paper, we extend a previous work on gaze modelling in two ways: we develop the attention model to drive both head and eye gaze and we integrate this model into SL. This provides an excellent platform for testing in general avatar encounters, and in the Experiments section, we discuss user experiments based on this platform.…”
Section: Methods
confidence: 99%
“…The attention model is designed to adapt to the complex interaction within the scene and is based on data‐driven modelling of gaze behaviour.…”
Section: Methods
confidence: 99%
“…In this implementation, eye tracking is used to deliver gaze data, which acts as input to the lid saccade model. The operation of the lid saccade model may then be interrupted by blink signals: detected by the eye tracker; generated from a gaze model 34; or inferred from other behavioural tracking devices including microphones monitoring verbal utterances, or body tracking monitoring gestures. Following completion of a blink, the lid saccade model then resumes control.…”
Section: Blink Model
confidence: 99%
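The interrupt-and-resume behaviour described in that citation statement amounts to a small state machine: lid saccades follow gaze until a blink signal takes over, and saccade control resumes once the blink completes. The class and method names below are illustrative assumptions, not the cited system's API.

```python
# Minimal sketch of blink interruption of a lid saccade model.
# All names (LidController, on_blink_start, etc.) are invented for
# illustration; the cited work's actual interfaces are not shown here.

class LidController:
    """Lid openness tracks gaze elevation unless a blink is in progress."""

    def __init__(self):
        self.blinking = False

    def on_blink_start(self):
        # Blink signal (from eye tracker, gaze model, or other sensors)
        # interrupts the lid saccade model.
        self.blinking = True

    def on_blink_end(self):
        # Blink complete: lid saccade model resumes control.
        self.blinking = False

    def lid_openness(self, gaze_elevation):
        """Return 0.0 (closed) during a blink, else a simple
        openness heuristic that narrows the lids for downward gaze."""
        if self.blinking:
            return 0.0
        return 1.0 - 0.5 * max(0.0, -gaze_elevation)
```

A driver loop would call `lid_openness` each frame and route blink-detection events to `on_blink_start` / `on_blink_end`, matching the hand-over described in the quotation.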