The rapid development of geo-referenced information has changed the way we access and interlink data. Smartphones, as enabling devices for information access, are a main driving factor: the hash key to information is the actual position, registered via the camera and sensors of the mobile device. A rising technology in this context is Augmented Reality (AR), as it fuses the real world captured with the smartphone camera with geo-referenced data. The technological building blocks analyse the intrinsic sensor data (camera, GPS, inertial) to derive a detailed pose of the smartphone, aiming to align geo-referenced information with our real environment. This is particularly interesting for applications where 3D models are used in planning and organization processes, e.g., facility management. Here, Building Information Models (BIM) were established to hold "as built" information, but also to manage the vast amount of additional information coming with the design, such as building components, properties, maintenance logs, documentation, etc. One challenge is to enable stakeholders involved in the overall building lifecycle to access the management system on mobile devices during on-site inspections, and to automate the feedback of newly generated information into the BIM. This paper describes a new AR framework that offers on-site access to BIM information and a user-centric annotation mechanism.
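The abstract only sketches how the derived pose is used to align geo-referenced data with the camera view. As a hedged illustration, the following Python sketch shows one standard way such alignment can work: convert the geodetic offset between an annotated point and the device's GPS fix into local east-north-up (ENU) metres, then apply a pinhole projection. The function names, intrinsics, pose, and coordinates are hypothetical placeholders, not the paper's implementation.

```python
# Hypothetical sketch: projecting a geo-referenced point into the
# smartphone camera image. All intrinsics and poses are placeholders;
# the paper's actual sensor-fusion pipeline is not reproduced here.
import numpy as np

R_EARTH = 6_378_137.0  # WGS-84 equatorial radius in metres

def geo_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Small-area approximation: geodetic offset -> local ENU metres."""
    d_lat = np.radians(lat - lat0)
    d_lon = np.radians(lon - lon0)
    east = d_lon * R_EARTH * np.cos(np.radians(lat0))
    north = d_lat * R_EARTH
    up = alt - alt0
    return np.array([east, north, up])

def project(point_enu, R_cam, t_cam, K):
    """Rotate/translate an ENU point into the camera frame (rotation
    from the inertial sensors, translation from GPS) and apply the
    pinhole model with intrinsics K."""
    p_cam = R_cam @ point_enu + t_cam
    if p_cam[2] <= 0:  # point is behind the camera
        return None
    u = K[0, 0] * p_cam[0] / p_cam[2] + K[0, 2]
    v = K[1, 1] * p_cam[1] / p_cam[2] + K[1, 2]
    return u, v

# Placeholder intrinsics and identity pose, for illustration only.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
anchor = geo_to_enu(49.8730, 8.6512, 144.0,   # annotated BIM element
                    49.8728, 8.6510, 143.0)   # device GPS fix
print(project(anchor, np.eye(3), np.zeros(3), K))
```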
Until recently, depth-sensing cameras were used almost exclusively in research due to the high cost of such specialized equipment. With the introduction of the Microsoft Kinect device, real-time depth imaging is now available to the ordinary developer at low cost, and so far it has been received with great interest by both the research and hobbyist developer communities. The underlying OpenNI framework not only allows the depth image to be extracted from the camera, but also provides tracking information for gestures or user skeletons. In this paper, we present a framework to include depth-sensing devices in X3D in order to enhance the visual fidelity of X3D Mixed Reality applications by introducing extensions for advanced rendering techniques. We furthermore outline how to calibrate depth and image data in a meaningful way for devices that do not come with pre-calibrated sensors, and we discuss some of the OpenNI functionality that X3D can benefit from in the future.
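The calibration the abstract refers to is the standard registration of a depth pixel into the colour image: back-project through the depth intrinsics, transform with the stereo extrinsics, and re-project through the colour intrinsics. The Python sketch below illustrates that step under assumed placeholder calibration values; it is not taken from the paper, and the numbers are not Kinect factory parameters.

```python
# Hypothetical sketch of depth-to-colour registration. All calibration
# values below are placeholders for illustration.
import numpy as np

# Placeholder intrinsics (fx, fy, cx, cy) for each sensor.
K_depth = np.array([[580.0, 0.0, 320.0],
                    [0.0, 580.0, 240.0],
                    [0.0, 0.0, 1.0]])
K_color = np.array([[525.0, 0.0, 320.0],
                    [0.0, 525.0, 240.0],
                    [0.0, 0.0, 1.0]])
# Placeholder extrinsics: rotation and translation (metres) from the
# depth camera frame to the colour camera frame.
R = np.eye(3)
t = np.array([0.025, 0.0, 0.0])

def depth_pixel_to_color(u, v, z):
    """Map depth pixel (u, v) with measured depth z [m] to colour
    image coordinates."""
    # Back-project through the depth intrinsics to a 3D point.
    p = z * np.linalg.inv(K_depth) @ np.array([u, v, 1.0])
    # Move into the colour camera frame and re-project.
    q = K_color @ (R @ p + t)
    return q[0] / q[2], q[1] / q[2]

print(depth_pixel_to_color(400, 300, 1.5))
```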
In this paper, we present a generic and affordable approach to automated, markerless capturing of movements in dance, which was developed in the Motion Bank / The Forsythe Company project (www.motionbank.org). Within Motion Bank we consider the complete digitalization workflow, starting with the setup of the camera array and ending with a web-based presentation of "Online Scores" visualizing different elements of choreography. We have used our technology in two modern dance projects: one "Large Motion Space Performance" covering a large stage in solos and trios, and one "Restricted Motion Space Performance" that is suited to being captured with range cameras. The project is realized in close cooperation with different choreographers and dance companies of modern ballet, and with multimedia artists creating the visual representations of dance.