Abstract. The augmented reality (AR) research community has developed a wealth of ideas and concepts to improve the depiction of virtual objects in a real scene. Yet current AR applications require unwieldy equipment, which discourages their use. To ease the perception of digital information and to enable natural interaction with the pervasive computing landscape, the required AR equipment has to be seamlessly integrated into the user's natural environment. Following this principle, this paper proposes the car as an AR apparatus and presents an innovative visualization paradigm for navigation systems that is designed to enhance user interaction.
Abstract. At present, various types of car navigation systems are progressively entering the market. At the same time, mobile outdoor navigation systems for pedestrians and electronic tourist guides are already available on handheld computers. Although the depiction of geographical information on these devices has improved considerably in recent years, users must still interpret an abstract metaphor on the navigation display and translate it into their real-world surroundings. This paper introduces an innovative visual paradigm for (mobile) navigation systems, embodied within an application framework that eases the perception of navigation information through mixed reality.
Abstract. A visual representation of an object must meet at least three basic requirements. First, it must allow identification of the object in the presence of slight but unpredictable changes in its visual appearance. Second, it must account for larger changes in appearance due to variations in the object's fundamental degrees of freedom, such as changes in pose. Third, any object representation must be derivable from visual input alone, i.e., it must be learnable. We construct such a representation by deriving transformations between the different views of a given object, so that they can be parameterized in terms of the object's physical degrees of freedom. Our method makes it possible to automatically derive the appearance representations of an object, together with their linear deformation models, from example images. These are subsequently used to provide linear charts covering the entire appearance manifold of a three-dimensional object. In contrast to approaches that aim at mere dimensionality reduction, the local linear charts of the object's appearance manifold are estimated on a strictly local basis, avoiding any reference to a metric embedding space common to all views. In this way, a real understanding of the object's appearance in terms of its physical degrees of freedom is learned from single views alone.
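The local linear charts described in this abstract can be illustrated with a minimal sketch: fit a chart to a neighborhood of views by local PCA, with the chart dimension equal to the object's physical degrees of freedom. This is only an assumed simplification for illustration, not the authors' actual algorithm; the function and parameter names (`fit_local_chart`, `n_dof`, etc.) are hypothetical.

```python
import numpy as np

def fit_local_chart(views, n_dof):
    """Fit a local linear chart to a set of neighboring object views.

    views : (n_views, n_pixels) array of vectorized images taken at
            nearby poses.
    n_dof : number of physical degrees of freedom of the object.

    Returns the chart origin (mean view) and an orthonormal basis whose
    columns span the local tangent directions of the appearance manifold.
    """
    origin = views.mean(axis=0)
    centered = views - origin
    # SVD of the centered views; the leading right-singular vectors
    # span the local linear deformation model around this neighborhood.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_dof].T  # shape (n_pixels, n_dof)
    return origin, basis

def chart_coords(view, origin, basis):
    # Project a single view into the chart's low-dimensional coordinates.
    return basis.T @ (view - origin)

def reconstruct(coords, origin, basis):
    # Map chart coordinates back to image space.
    return origin + basis @ coords
```

A view lying in the chart's neighborhood can then be encoded by a few pose-like coordinates and reconstructed from them, which is the sense in which the chart is strictly local: no global embedding of all views is ever required.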