Augmented Reality (AR) applications offer great potential for supporting users in solving tasks, e.g. in scenarios from the domains of assembly, maintenance and repair. AR applications enrich the physical world around the user with additional useful virtual information. However, the integration of physical objects into AR user interfaces poses a challenge to developers. In addition, the information displayed in such an environment often depends strongly on the user's current task. In this paper we present an approach to specify, at the design level, the integration of real objects into AR user interfaces and the task-dependent visualization of AR user interface elements. To describe user tasks, the AR user interface structure and the relations between them, we use UML activity diagrams in combination with the Scene Structure and Integration Modelling Language (SSIML), a visual language which supports the description of 3D user interface structures. Furthermore, code can be generated from the visual models. The proposed concepts are illustrated by an AR application example from the domain of assembly and lead to a new language called SSIML/AR.

INTRODUCTION

In an Augmented Reality (AR) environment, the real world around the user is enriched with virtual content such as 3D objects. In such an environment, the user manipulates real objects to generate system input data. Thus, both real and virtual objects become parts of the user interface. Azuma [1] characterizes AR as a system that combines the real and the virtual world, works in real-time (i.e. the system reacts to user actions in real-time) and superimposes virtual objects properly on real objects in 3D.

AR technologies offer great potential for domains such as medicine or assembly, maintenance and repair. However, most previous research effort has been spent on AR base technologies. Thus, AR applications are often created from scratch using only low-level toolkits. This kind of development can become time-consuming and error-prone when more complex AR systems are built. Especially the design of the AR user interface, one of the most important components of an AR system, often poses a challenge to the developer. There is still a lack of concepts and tools for a structured and more efficient development. In particular, the possibility to plan and design the application on a more abstract level could speed up and ease the development process.

In classical software engineering, visual modelling languages have proven to be suitable instruments for abstract software design. The Unified Modelling Language (UML) [2] is the de-facto standard for visual software design and allows a (semi-)formal software specification. While the software components of an AR system can be specified with UML, the AR user interface in particular cannot be specified without extending UML. For example, UML does not provide an explicit distinction between real and virtual objects, yet an AR user interface contains real (tangible) objects. In this paper we propose a ne...
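To make the task-dependent visualization aspect more concrete, the following minimal sketch shows one way code generated from such a model could switch the visible AR user interface elements when the user's current task (a step modelled in a UML activity diagram) changes. It assumes a Java target; all class and method names (UIElement, TaskController, enterTask, etc.) are hypothetical illustrations, not prescribed by SSIML/AR.

    import java.util.*;

    // A virtual UI element that can be shown or hidden in the AR scene.
    class UIElement {
        final String name;
        UIElement(String name) { this.name = name; }
        void setVisible(boolean v) {
            System.out.println(name + (v ? " shown" : " hidden"));
        }
    }

    // Maps each user task (e.g. one assembly step from the activity
    // diagram) to the UI elements relevant for that task.
    class TaskController {
        private final Map<String, List<UIElement>> elementsPerTask = new HashMap<>();
        private final List<UIElement> allElements = new ArrayList<>();

        void register(String task, UIElement e) {
            elementsPerTask.computeIfAbsent(task, k -> new ArrayList<>()).add(e);
            if (!allElements.contains(e)) allElements.add(e);
        }

        // When the user's current task changes, show only the relevant elements.
        void enterTask(String task) {
            List<UIElement> relevant = elementsPerTask.getOrDefault(task, List.of());
            for (UIElement e : allElements) e.setVisible(relevant.contains(e));
        }
    }

    public class TaskDependentArDemo {
        public static void main(String[] args) {
            UIElement arrow = new UIElement("3D arrow at screw hole");
            UIElement label = new UIElement("text label: insert screw");
            TaskController tc = new TaskController();
            tc.register("InsertScrew", arrow);
            tc.register("InsertScrew", label);
            tc.register("AttachCover", arrow);
            tc.enterTask("InsertScrew"); // arrow and label become visible
            tc.enterTask("AttachCover"); // only the arrow stays visible
        }
    }

The task-to-element mapping plays the role of the relations between the activity diagram and the user interface structure described above: each transition in the task model triggers a change of the presented information.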
Augmented Reality (AR) technologies open up new possibilities, especially for task-focused domains such as assembly and maintenance. However, there is still a lack of concepts and tools for a structured AR development process and for specifying applications above the code level. To address this problem we introduce SSIML/AR, a visual modelling language for the abstract specification of AR applications in general and AR user interfaces in particular. With SSIML/AR, three different aspects of AR user interfaces can be described: the user interface structure, the presentation of relevant information depending on the user's current task, and the integration of the user interface with other system components. Code skeletons can be generated automatically from SSIML/AR models, enabling a seamless transition from the design level to the implementation level. In addition, we sketch how SSIML/AR models can be integrated into an overall AR development process.
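The following sketch illustrates what such an automatically generated code skeleton might look like for the structure and integration aspects, again assuming a Java target. The paper does not fix a target platform; all names (SceneNode, RealObjectNode, AssemblyScene, etc.) are hypothetical, and the empty method body marks an integration point that the developer would complete by hand.

    // Common base for all nodes of the AR user interface structure.
    abstract class SceneNode {
        final String id;
        SceneNode(String id) { this.id = id; }
    }

    // A physical, tracked object (e.g. a workpiece) integrated into the UI.
    class RealObjectNode extends SceneNode {
        String trackingMarkerId;   // to be filled in by the developer
        RealObjectNode(String id) { super(id); }
    }

    // A virtual object (e.g. a 3D arrow) superimposed on the real scene.
    class VirtualObjectNode extends SceneNode {
        String geometryFile;       // to be filled in by the developer
        VirtualObjectNode(String id) { super(id); }
    }

    // Generated scene structure: one field per element of the SSIML/AR model.
    class AssemblyScene {
        final RealObjectNode housing = new RealObjectNode("housing");
        final VirtualObjectNode screwHint = new VirtualObjectNode("screwHint");

        // Integration point with other system components (e.g. the tracker):
        // the generator emits an empty body for application code to complete.
        void onTrackingUpdate(String markerId, float[] pose) {
            // TODO: update the pose of the node associated with markerId
        }
    }

Keeping the real/virtual distinction explicit in the generated skeleton mirrors the distinction made in the model itself, so the design-level separation survives at the implementation level.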