Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction 2020
DOI: 10.1145/3374920.3374945

EGuide

Abstract: Figure 1: (a) shows a user with EGuide. Infrared cameras track the user's movements. A virtual-reality head-mounted display provides output. (b) gives an overview of the independent variables, visual appearance and guidance technique, that we investigated in a user study. (c) and (d) show two of the investigated guidance visualizations: one with an abstract (VA_ABSTRACT) and one with a realistic (VA_REALISTIC) visual shape. The figures also show two of the three investigated guidance techniques: In (c) the guida…

Cited by 17 publications (10 citation statements). References 25 publications.
“…Start set B contains prominent publications (over 100 citations on Google Scholar) that fit our scope: Just Follow Me [58], ShadowGuides [18], LightGuide [51], YouMove [1] and Physio@Home [53]. Start set C contains publications within the past 5 years on XR-based motion guidance across a broad research spectrum, including: Kodama et al. [32] on training using virtual co-embodiment, Zhou et al. [61] on motion guidance with an MR mirror, Lilija et al. [39] on correction of virtual hand avatar movements, Yu et al. [59] on the influence of perspective in motion guidance, and Dürr et al. [14] on the virtual appearance of feedforward.…”
Section: Methodology of Literature Review
confidence: 99%
“…Therefore, their effectiveness is limited to scenarios with a clear movement objective, such as dancing, yoga, and rehabilitation exercises. This explains why some guides were designed to focus on providing feedback on how far or close users' actions have been [42,43], with only limited feedforward.…”
Section: Guidance for Mid-air and Related Gestures
confidence: 99%
“…Due to that, in addition to presenting pre-recorded dance videos, a 2D screen-aligned skeleton representation was integrated into the YouMove system [1] to enable feedback provision. Although a skeleton presentation was used instead of a realistic human visualization, findings suggest that, in dynamic information presentations, more realistic shapes increase acceptance [5,16] and movement accuracy [4]. Displaying the three-dimensional (3D) content in a 2D way lowers implementation costs.…”
Section: Two-dimensional Presentation Concepts
confidence: 99%
“…The results indicate advantages of 3D environments over the presentation of 2D material for learning scenarios in terms of learning time, motion similarity, and user experience. Apart from presenting a virtual human in an exocentric perspective, which increases cognitive load due to multiple stimuli and the effort needed to transfer the perceived exocentric motion to one's own body [4], egocentric presentations are utilized. AR-Arm [12] is an immersive augmented reality (AR) tool for training Tai-Chi motions of the upper limbs in a first-person perspective: the movements of the virtual arms are displayed in an egocentric perspective and imitated by the users, which leads to benefits in terms of body ownership compared to a 2D screen method.…”
Section: Three-dimensional Presentation Concepts
confidence: 99%