The rapid development of geo-referenced information has changed the way we access and interlink data. Smartphones, as enabling devices for information access, are a main driving factor. Thus, the key to information is the actual position registered via the camera and sensors of the mobile device. A rising technology in this context is Augmented Reality (AR), as it fuses the real world captured with the smartphone camera with geo-referenced data. The technological building blocks analyse the intrinsic sensor data (camera, GPS, inertial) to derive a detailed pose of the smartphone, aiming to align geo-referenced information with our real environment. This is particularly interesting for applications where 3D models are used in planning and organization processes, e.g., facility management. Here, Building Information Models (BIM) were established in order to hold "as built" information, but also to manage the vast amount of additional information coming with the design, such as building components, properties, maintenance logs, documentation, etc. One challenge is to enable stakeholders involved in the overall building lifecycle to access the management system on mobile devices during on-site inspections and to automate the feedback of newly generated information into the BIM. This paper describes a new AR framework that offers on-site access to BIM information and a user-centric annotation mechanism.
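To make the alignment step concrete, the following is a minimal sketch, not the paper's implementation, of how a geo-referenced point can be overlaid on the camera image: the GPS position anchors a local East-North-Up frame, the inertial sensors supply the device orientation, and the camera intrinsics project the point into pixel coordinates. All function names, matrices, and numeric values are illustrative assumptions.

```python
import numpy as np

EARTH_RADIUS = 6378137.0  # WGS84 equatorial radius in metres

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Approximate East-North-Up offset of (lat, lon, alt) relative to the
    device position (lat0, lon0, alt0); valid over short distances."""
    east = np.radians(lon - lon0) * EARTH_RADIUS * np.cos(np.radians(lat0))
    north = np.radians(lat - lat0) * EARTH_RADIUS
    up = alt - alt0
    return np.array([east, north, up])

def project_to_image(p_enu, R_world_to_cam, K):
    """Rotate an ENU point into the camera frame (orientation from the
    inertial sensors) and project it with the intrinsic matrix K."""
    p_cam = R_world_to_cam @ p_enu
    if p_cam[2] <= 0:          # behind the camera: not visible
        return None
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]   # pixel coordinates (u, v)

# Illustrative intrinsics: focal length 1000 px, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
# Camera looking north: map ENU axes (x=east, y=north, z=up)
# to camera axes (x=right, y=down, z=forward).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
# An annotation roughly 25 m north of the device lands at the image centre.
print(project_to_image(geodetic_to_enu(52.52045, 13.4050, 34.0,
                                       52.52022, 13.4050, 34.0), R, K))
```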
Augmented Reality (AR) turns out to be a good technology for training in the field of maintenance and assembly, as instructions and other location-dependent information can be directly linked and/or attached to physical objects. Since objects to be maintained usually contain a large number of similar components (e.g. screws, plugs, etc.), the provision of location-dependent information is vitally important. Another advantage is that AR-based training takes place with the real physical devices of the training scenario. Thus, the trainee also practises the real use of the tools, whereby the corresponding sensorimotor skills are trained.
Until recently, depth-sensing cameras have been used almost exclusively in research due to the high cost of such specialized equipment. With the introduction of the Microsoft Kinect device, real-time depth imaging is now available to the ordinary developer at low cost, and so far it has been received with great interest by both the research and hobbyist developer communities. The underlying OpenNI framework not only allows extracting the depth image from the camera, but also provides tracking information for gestures or user skeletons. In this paper, we present a framework to include depth-sensing devices in X3D in order to enhance the visual fidelity of X3D Mixed Reality applications by introducing some extensions for advanced rendering techniques. We furthermore outline how to calibrate depth and image data in a meaningful way for devices that do not already come with precalibrated sensors, and discuss some of the OpenNI functionality that X3D can benefit from in the future.
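As a hedged illustration of the calibration step described above, the sketch below maps a depth pixel into the colour image given each camera's intrinsics and the rigid depth-to-colour transform, as would be estimated by a standard checkerboard stereo calibration. This is not the paper's code; all matrices and numeric values are illustrative placeholders.

```python
import numpy as np

# Assumed intrinsics for the depth and colour cameras (fx, fy, cx, cy).
K_depth = np.array([[575.8, 0.0, 320.0],
                    [0.0, 575.8, 240.0],
                    [0.0, 0.0, 1.0]])
K_color = np.array([[525.0, 0.0, 320.0],
                    [0.0, 525.0, 240.0],
                    [0.0, 0.0, 1.0]])
R = np.eye(3)                     # rotation: depth -> colour camera frame
t = np.array([0.025, 0.0, 0.0])   # ~2.5 cm baseline between sensors, metres

def depth_pixel_to_color(u, v, depth_m):
    """Back-project a depth pixel (u, v) with metric depth into 3D,
    transform it into the colour camera frame, and re-project it
    into the colour image."""
    # Back-projection: pixel -> 3D point in the depth camera frame.
    p_depth = depth_m * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Rigid transform into the colour camera frame.
    p_color = R @ p_depth + t
    # Perspective projection into the colour image.
    uvw = K_color @ p_color
    return uvw[:2] / uvw[2]

# Example: a point 1.5 m in front of the depth camera's principal point.
print(depth_pixel_to_color(320, 240, 1.5))
```

With precalibrated devices (such as later Kinect models), this transform is supplied by the vendor; the per-device calibration described in the abstract is only needed when it is not.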