Object selection is a primary interaction technique that must be supported by any interactive three-dimensional virtual reality application. Although numerous techniques exist, few have been designed to support the selection of objects in dense target environments, or the selection of objects that are occluded from the user's viewpoint, so there is a limited understanding of how these factors affect selection performance. In this paper, we present a set of design guidelines and strategies to aid the development of selection techniques that can compensate for environment density and target visibility. Based on these guidelines, we present two techniques, the depth ray and the 3D bubble cursor, both augmented to allow the selection of fully occluded targets. In a formal experiment, we evaluate the relative performance of these techniques while varying both environment density and target visibility. The results show that both techniques outperformed a baseline point-cursor technique, with the depth ray performing best overall.
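To make the depth-ray idea concrete, here is a minimal sketch under our own assumptions (sphere targets, a unit-length ray direction, and illustrative names; this is not the paper's implementation): every target the ray passes through is a candidate, and the candidate whose depth along the ray lies closest to a user-controlled depth marker is selected.

    import math

    def depth_ray_select(origin, direction, marker_depth, targets):
        # Collect every sphere target the ray intersects; each target is
        # {"center": (x, y, z), "radius": r} and `direction` is a unit vector.
        candidates = []
        for t in targets:
            v = [t["center"][i] - origin[i] for i in range(3)]
            depth = sum(v[i] * direction[i] for i in range(3))
            closest = tuple(origin[i] + depth * direction[i] for i in range(3))
            if depth > 0 and math.dist(closest, t["center"]) <= t["radius"]:
                candidates.append((abs(depth - marker_depth), t))
        # The candidate nearest the depth marker wins, regardless of occlusion.
        return min(candidates, key=lambda c: c[0])[1] if candidates else None

    targets = [{"center": (0.0, 0.0, 2.0), "radius": 0.2},   # front target
               {"center": (0.0, 0.0, 5.0), "radius": 0.2}]   # occluded behind it
    picked = depth_ray_select((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 4.5, targets)
    print(picked["center"])  # -> (0.0, 0.0, 5.0)

Because disambiguation uses only depth along the ray, the fully occluded rear target remains selectable, which is the property an occlusion-aware augmentation can rely on.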
The integration of semantic information in virtual environment interaction is still mostly ad hoc: the semantic information is typically built into the design of the framework itself so that it can be exploited during interaction. We introduce a model-based user interface approach in which semantic information, represented using ontologies, is introduced during the modelling phase. This semantic information is created during the design of the virtual world. Our approach is system independent and allows the semantic content to be chosen and adapted in complete freedom, without considering the underlying framework. We incorporate semantics into NiMMiT, our notation for multimodal interaction modelling, and present two case studies that validate the flexibility of our approach.
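As a hedged illustration of the idea (the identifiers and predicates below are hypothetical, not taken from the paper): the semantic information can be kept as ontology-style subject-predicate-object triples attached to scene objects, separate from the rendering framework, so the interaction layer can query meaning independently of the underlying system.

    # Hypothetical semantic annotations created during world design,
    # stored apart from any particular VE framework.
    semantics = {
        "door_01": [("rdf:type", "onto:Door"), ("onto:opensInto", "room_02")],
        "room_02": [("rdf:type", "onto:Room")],
    }

    def objects_of_type(type_iri):
        # Query the annotations, e.g. to enable an "open" interaction
        # only on objects the ontology marks as doors.
        return [obj for obj, triples in semantics.items()
                if ("rdf:type", type_iri) in triples]

    print(objects_of_type("onto:Door"))  # -> ['door_01']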
Despite decades of research, creating intuitive and easy-to-learn interfaces for 3D virtual environments (VEs) remains difficult, requiring VE specialists to define, implement, and evaluate solutions iteratively, often in low-level programming code. Moreover, interaction with the virtual environment frequently varies with the context in which it is applied, such as the available hardware setup, the user's experience, or the user's pose (e.g. sitting or standing). Lacking other tools, an application's context-awareness is usually implemented in an ad-hoc manner as well, using low-level programming, which may result in code that is difficult and expensive to maintain. One possible way to facilitate the creation of these highly interactive user interfaces is to adopt model-based user interface design (MBUID), which lifts the creation of a user interface to a higher level and allows the designer to reason in terms of high-level concepts rather than programming code. In this paper, we adopt an MBUID process for the creation of VEs and explain how a context system using an Event-Condition-Action paradigm is added. We illustrate our approach by means of a case study.
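A minimal sketch of an Event-Condition-Action rule of the kind the abstract describes (class, event, and technique names are our illustrative assumptions, not the paper's API): when an event arrives and its condition holds against the current context, the action adapts the interface.

    class ECARule:
        def __init__(self, event, condition, action):
            self.event, self.condition, self.action = event, condition, action

        def fire(self, event, context):
            # The event must match and the condition must hold before acting.
            if event == self.event and self.condition(context):
                self.action(context)

    def switch_to_ray(ctx):
        ctx["selection_technique"] = "ray_casting"

    # If the user stands up, switch from a near-field to a far-field technique.
    rule = ECARule("posture_changed",
                   lambda ctx: ctx.get("pose") == "standing",
                   switch_to_ray)

    context = {"pose": "standing", "selection_technique": "virtual_hand"}
    rule.fire("posture_changed", context)
    print(context["selection_technique"])  # -> ray_casting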
Designing and exploring multimodal interaction techniques, such as those used in virtual environments, can be facilitated by high-level notations. Besides task modelling, notations such as our notation NiMMiT have been introduced at the dialog level. For advanced interaction techniques, there is not yet an established approach for deciding when to stop detailing the task model and to continue modelling at the dialog level. Also, context-awareness is usually introduced at the task level rather than at the dialog level. We show that this can cause an explosion in the number of dialog states when context-aware multimodal interaction is used within a single task. We therefore propose an approach that introduces contextual knowledge at the dialog level, where transitions are chosen based on context information. We validate our approach in a case study, from which we conclude that the augmented notation is easy to use and successfully introduces context at the dialog level.
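A small sketch of the core idea (state, event, and context names are hypothetical): instead of duplicating the whole dialog once per context, a single transition consults the current context to choose the next state, which is what keeps the number of dialog states from exploding.

    # (state, event) -> function(context) -> next state
    transitions = {
        ("idle", "select"): lambda ctx: ("select_by_ray"
                                         if ctx["input"] == "wand"
                                         else "select_by_touch"),
    }

    def step(state, event, context):
        # Fall back to the current state if no transition is defined.
        chooser = transitions.get((state, event))
        return chooser(context) if chooser else state

    print(step("idle", "select", {"input": "wand"}))    # -> select_by_ray
    print(step("idle", "select", {"input": "tablet"}))  # -> select_by_touch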