Figure 1: We present an ownership-preserving direct manipulation technique in augmented reality, which allows interaction with remote devices in a ubicomp environment with the help of a long virtual arm. While the user's real hand is close to the body, the virtual arm is of normal length (A); by simply reaching out, the user can extend it to access remote devices in the room. For instance, we allow adjusting the height of a table (B), opening and closing a curtain (C), and adjusting the angle of a tilting surface (D).

ABSTRACT
In this paper, we explore how users can control remote devices with a virtual long arm while preserving the perception that the artificial arm is actually part of their own body. Instead of using pointing, speech, or a remote control, the user's arm is extended in augmented reality, allowing access to devices that are out of reach. Thus, we allow users to directly manipulate real-world objects from a distance using their bare hands. A core difficulty we focus on is how to maintain ownership of the unnaturally long virtual arm, i.e., the strong feeling that one's limbs are actually part of one's own body. Fortunately, what the human brain experiences as part of one's own body is very malleable, and we find that during interaction the user's virtual arm can be stretched to more than twice its real length without breaking the user's sense of ownership of the virtual limb.
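The reach-dependent extension described in the caption is commonly implemented with a Go-Go-style non-linear mapping (Poupyrev et al., 1996): within arm's reach the virtual hand tracks the real hand one-to-one, and beyond a threshold the offset grows quadratically. The following is a minimal sketch of that idea; the threshold and gain values are hypothetical illustrations, not parameters taken from the paper.

```python
# Minimal sketch of a Go-Go-style non-linear arm-extension mapping.
# Parameter values are illustrative, not the ones used in the paper.

def virtual_arm_length(real_dist: float,
                       threshold: float = 0.4,
                       gain: float = 8.0) -> float:
    """Map the real hand's distance from the body (in meters) to the
    virtual hand's distance from the body.

    Within `threshold` of the body the mapping is one-to-one, so the
    virtual arm looks and behaves like the real arm. Beyond it, the
    virtual arm extends quadratically, letting the user reach remote
    devices by simply reaching out further.
    """
    if real_dist <= threshold:
        return real_dist
    return real_dist + gain * (real_dist - threshold) ** 2


print(virtual_arm_length(0.3))  # 0.3  -> unchanged, arm looks normal
print(virtual_arm_length(0.7))  # 1.42 -> arm stretches to a remote device
```

With these example values, a hand 0.7 m from the body maps to a virtual hand 1.42 m away, roughly doubling the user's reach while nearby movements remain unchanged.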
Cursors, avatars, virtual hands or tools, and other rendered graphical objects enable users to interact with computers such as PCs, game consoles, or virtual reality systems. We analyze the role of these various objects from a user perspective under the unifying concept of "User Representations". These representations are virtual objects that artificially extend the users' physical bodies, enabling them to manipulate the virtual environment by performing motor actions that are continuously mapped to their User Representations. In this paper, we identify a set of concepts that are relevant for different User Representations, and provide a multidisciplinary review of the multisensory and cognitive factors underlying the control and subjective experience of User Representations. These concepts include visual appearance, multimodal feedback, sense of agency, input methods, peripersonal space, visual perspective, and body ownership. We further suggest a research agenda for these concepts, which can lead the human-computer interaction community towards a wider perspective of how users perceive and interact through their User Representations.
Figure 1. We created a virtual environment for designers, in which they can generate and arrange an arbitrary number of devices that execute real-world web applications (A). This allows simulation of existing interactive spaces and multi-device systems (B, C) [71], as well as sketching of new interactions with diverse tracking systems or futuristic devices, e.g., a cylindrical touch screen (D).
Figure 1: Based on a use case at Grundfos, we implemented a virtual training simulation of a structured maintenance task for training novice technicians. In a user study, we evaluated the potential of this VR Training simulation (C) in comparison with two traditional training methods: (A) Pairwise Training and (B) Video Training.
Human sensory processing is sensitive to the proximity of stimuli to the body. It is therefore plausible that these perceptual mechanisms also modulate the detectability of content in VR, depending on its location. We evaluate this in a user study and further explore the impact of the user's representation during interaction. We also analyze how embodiment and motor performance are influenced by these factors. In a dual-task paradigm, participants executed a motor task, either through virtual hands, virtual controllers, or a keyboard. Simultaneously, they detected visual stimuli appearing in different locations. We found that, while participants actively performed a motor task in the virtual environment, their performance in detecting additional visual stimuli was higher for stimuli presented near the body. This effect is independent of how the user is represented and occurs only when the user is also engaged in a secondary task. We further found improved motor performance and increased embodiment when interacting through virtual tools and hands in VR, compared to interacting with a keyboard. This study contributes to a better understanding of the detectability of visual content in VR, depending on its location in the virtual environment, as well as of the impact of different user representations on information processing, embodiment, and motor performance.
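To make the dual-task paradigm concrete, here is a minimal sketch of how a probe schedule and detection measure like the one described might be set up. All names, distances, and timings are hypothetical and not taken from the study.

```python
# Sketch of a dual-task detection measure: while the participant performs
# a motor task, visual probes appear either near the body (peripersonal
# space) or far from it, and detection responses are logged per region.
import random
from dataclasses import dataclass


@dataclass
class Probe:
    region: str                      # "near" (~0.4 m) or "far" (~1.5 m)
    onset_s: float                   # time the probe appeared
    response_s: float | None = None  # time of detection; None = missed


def make_trials(n: int, seed: int = 42) -> list[Probe]:
    """Build a balanced, shuffled sequence of near/far probes with
    jittered onsets, so probe location and timing are unpredictable."""
    rng = random.Random(seed)
    regions = ["near", "far"] * (n // 2)
    rng.shuffle(regions)
    t, trials = 0.0, []
    for region in regions:
        t += rng.uniform(1.0, 3.0)  # jittered inter-stimulus interval
        trials.append(Probe(region=region, onset_s=t))
    return trials


def hit_rate(trials: list[Probe], region: str) -> float:
    """Fraction of probes in a region that were detected."""
    sub = [p for p in trials if p.region == region]
    return sum(p.response_s is not None for p in sub) / len(sub)
```

Comparing `hit_rate(trials, "near")` against `hit_rate(trials, "far")` (and the corresponding reaction times) would yield the kind of location-dependent detection measure the abstract reports.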