Proceedings of the 32nd International Conference on Computer Animation and Social Agents 2019
DOI: 10.1145/3328756.3328758

Design of Seamless Multi-modal Interaction Framework for Intelligent Virtual Agents in Wearable Mixed Reality Environment

Cited by 22 publications (20 citation statements)
References 19 publications

“…The co-speech gesture system in that framework is rule-based, similar to Reference 14. This article extends the work of Ali et al., 24 with a focus on the approach to automatically generate co-speech gesture rules from large-scale data for ECAs.…”
Section: Introduction
confidence: 80%
“…Due to the high memory and computation cost, this method is not feasible to run on mobile and wearable devices. For these types of devices, this method can be deployed on the cloud 24 …”
Section: Discussion
confidence: 99%
“…Users wear AR glasses or use handheld devices such as a smartphone or tablet to see the mixed environment and interact with virtual objects in real time. Since users can see the real environment (see Figure 1a), AR systems often require accurate registration of virtual objects to provide seamless interactions in various situations [2,3]. Incorrect registration of a virtual object in the real space can cause unrealistic occlusions [4,5] or physically implausible situations [6,7], leading to perceptual quality degradation and breaks in presence [8].…”
Section: Introduction
confidence: 99%