Proceedings of the Symposium on Spatial User Interaction 2018
DOI: 10.1145/3267782.3267791
Injecting Nonverbal Mimicry with Hybrid Avatar-Agent Technologies

Cited by 17 publications (22 citation statements) · References 28 publications
“…To our knowledge, the combined replication of body movement, facial expression, and gaze behavior in a shared and HMD-based avatar-mediated system has not yet been presented, and represents a first requirement (RE1) in order to support variable social augmentation features. With regard to the augmentation of behaviors, previous approaches are limited to linear transformations, such as dampening or amplifying of facial cues [Boker et al 2009; Oh et al 2016; Roth et al 2018c] or modification based on intra-personal information [Bailenson et al 2006]. Bailenson et al [2008] discussed the presentation of real-time information of social cues during communication, and Roth et al [2018b] implemented a speaker-listener based paradigm of gaze augmentation.…”
Section: Discussion
confidence: 99%
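The linear transformations mentioned in the excerpt above (dampening or amplifying facial cues) can be illustrated with a minimal sketch. This assumes blendshape-style expression weights in the range [0, 1]; the function name augment_expression, the gain parameter, and the neutral baseline are illustrative assumptions, not details of the cited systems.

import numpy as np

def augment_expression(weights, gain=1.5, neutral=0.0):
    # Linearly amplify (gain > 1) or dampen (gain < 1) tracked expression
    # weights relative to a neutral baseline, then clamp to the valid range.
    weights = np.asarray(weights, dtype=float)
    augmented = neutral + gain * (weights - neutral)
    return np.clip(augmented, 0.0, 1.0)

# Example: tracked smile/brow activations, amplified or dampened
# before retargeting them to the interlocutor's avatar.
tracked = [0.2, 0.05, 0.6]
print(augment_expression(tracked, gain=1.5))  # approx. [0.3, 0.075, 0.9]
print(augment_expression(tracked, gain=0.5))  # approx. [0.1, 0.025, 0.3]

In this reading, a hybrid avatar-agent pipeline would apply such a transformation between tracking and rendering, so the displayed behavior deviates from the sender's actual behavior by a controlled amount.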
“…The replication of behavior was limited to keyboard input, and participants reported a lack of feedback due to missing body movement and body language [Tromp et al 1998]. Steptoe, Roberts and colleagues investigated additional modalities for social interaction by including eye gaze [… et al 2009].…”
[Figure caption fragments in the excerpt: eye gaze [Hart et al 2018a] (© 2018 Hart et al.); c) augmented nonverbal mimicry [Roth et al 2018c] (© 2018 Roth et al.); d) visual transformation, substitution, and amplification of social behaviors [Roth et al 2018a] (© 2018 IEEE/Roth et al.)]
Section: Avatar-mediated Systems
confidence: 99%
“…Non-verbal communication behavior is usually conveyed in mutual conversation, whether face-to-face, through video conferencing [Whittaker 2003], or through embodied avatars in VR [Dodds et al 2011; Fabri et al 2002; Roth et al 2018; Smith and Neff 2018]. The non-verbal cues delivered by virtual characters in a collaborative virtual environment influence the efficiency of task performance [Roth et al 2018; Smith and Neff 2018]. A mirror is usually used in the single-user scenario [Collingwoode-Williams et al 2017; Kilteni et al 2013; Maister et al 2015] to evaluate communication behavior such as non-verbal cues.…”
Section: Communication Behavior in VR
confidence: 99%