Ubiquitous tracking setups, covering large tracking areas with many heterogeneous sensors of varying accuracy, require dedicated middleware that facilitates the development of stationary and mobile applications by providing a simple interface and encapsulating the details of sensing, calibration and sensor fusion. In this paper we present a centrally coordinated peer-to-peer architecture for ubiquitous tracking, in which a server computes optimal data flow configurations for sensor and application clients, while the clients exchange tracking data directly, with low latency, using a lightweight data flow framework. The server's decisions are inferred from an actively maintained central spatial relationship graph using spatial relationship patterns. The system is compared to a previous Ubitrack implementation based on the highly distributed DWARF middleware and exhibits significantly better performance in a reference scenario.

MOTIVATION

In industrial augmented reality scenarios, there is a growing demand for integrated working environments which span large factory buildings. In such an environment, many different mobile and stationary AR-supported applications, such as logistics, production, maintenance or factory planning, may coexist and require shared access to permanent tracking with varying accuracy requirements. Today, no single technology exists that satisfies the tracking requirements of all these applications and can, at least for a reasonable price, be deployed throughout such an environment. For this reason, in a realistic setup, many different tracking systems would be installed, ranging from low-precision wide-area WLAN tracking to infrared-optical systems covering only small areas with high accuracy. The installation, maintenance and expansion of such a large-scale heterogeneous tracking environment poses new challenges to the underlying middleware concepts.

Heterogeneous wide-area tracking environments

Emerging tracking methods based on technologies like WLAN or RFID make it possible to deploy tracking to ever-larger indoor areas. With increasing tracker coverage, a larger diversity of AR applications will need to share this tracking infrastructure. Stationary applications that are already in use will increasingly be complemented by mobile applications that would have been impossible without wide-area tracking. Applications that are stationary today might also benefit from enlarging tracking areas and become more adaptive and better integrated into the productive environment. Many of these wide-area tracking systems have the drawback of being rather imprecise. Nevertheless, they serve quite well for navigation problems and can thereby bridge the gap between islands of higher tracking accuracy. Furthermore, they can provide useful initial positions to other sensors, such as markerless optical trackers [7]. There are also many examples where a fusion of measurements from different mobile and stationary sensors improves overall trac...
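To make the data-flow idea concrete, the sketch below models a spatial relationship graph as an adjacency map whose edges carry rigid transforms together with error estimates; a query for the pose of one coordinate frame relative to another is answered by searching for the lowest-error path and composing the 4x4 transforms along it. This is a minimal illustration of the concept under simplifying assumptions (a scalar additive error model, Dijkstra-style search), not the Ubitrack API; all class and method names are hypothetical.

```python
# Minimal, hypothetical sketch of a spatial relationship graph (SRG).
# Edges carry a 4x4 rigid transform and a scalar error estimate; a query
# finds the lowest-error path between two coordinate frames and returns
# the composed transform. The additive error model is a simplification.
import heapq
import numpy as np

class SpatialRelationshipGraph:  # hypothetical name, not the Ubitrack API
    def __init__(self):
        self.edges = {}  # frame -> list of (neighbor frame, transform, error)

    def add_relationship(self, src, dst, T, error):
        self.edges.setdefault(src, []).append((dst, T, error))
        # also store the inverse edge so queries work in both directions
        self.edges.setdefault(dst, []).append((src, np.linalg.inv(T), error))

    def query(self, src, dst):
        """Return (pose of dst in the src frame, accumulated error), or None."""
        best = {src: (0.0, np.eye(4))}  # frame -> (error, composed transform)
        queue = [(0.0, src)]
        while queue:
            err, node = heapq.heappop(queue)
            if node == dst:
                return best[node][1], err
            if err > best[node][0]:
                continue  # stale queue entry
            for nbr, T, e in self.edges.get(node, []):
                new_err = err + e
                if nbr not in best or new_err < best[nbr][0]:
                    best[nbr] = (new_err, best[node][1] @ T)
                    heapq.heappush(queue, (new_err, nbr))
        return None  # frames not connected
```

An application client asking "where is the HMD relative to the world?" would then call query("world", "hmd") and receive a composed pose along the most accurate available sensor chain, which is essentially the kind of decision the central server makes before wiring sensor and application clients together.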
This paper describes the implementation of a novel prototypical Underwater Augmented Reality (UWAR) system that provides visual aids to increase commercial divers' capability to detect, perceive, and understand elements in underwater environments. During underwater operations, a great amount of stress is imposed on divers by environmental and working conditions such as pressure, visibility, weightlessness, and current. These factors restrict divers' sensory input, cognition and memory, which are essential for locating themselves within their surroundings and performing their tasks effectively. The focus of this research was to improve some of those conditions by adding elements to divers' views in order to increase awareness and safety in commercial diving operations. We accomplished this goal by assisting divers in locating the work site, keeping them constantly informed about their orientation and position, and providing a 3D virtual model for an assembly task. The system consisted of a video see-through head-mounted display (HMD) with a webcam in front of it, protected by a custom waterproof housing placed over the diving mask. As a very first step, optical square markers were used for position and orientation (pose) tracking. The tracking was implemented with ubiquitous tracking software (Ubitrack). Finally, we discuss the possible implications of a diver-machine synergy.
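To give a flavor of the square-marker pose tracking the system relies on, the sketch below detects ArUco-style markers and recovers each marker's pose with a planar PnP solve. It uses OpenCV rather than Ubitrack itself, so it only approximates the paper's pipeline; the marker size, the marker dictionary, and the availability of calibrated camera intrinsics K and distortion coefficients dist are all assumptions.

```python
# Illustrative square-marker pose estimation with OpenCV (not Ubitrack).
# Assumes OpenCV >= 4.7 with the aruco module and a calibrated camera.
import cv2
import numpy as np

MARKER_SIZE = 0.10  # marker edge length in meters (assumed)

# Marker corners in the marker's own frame (z = 0 plane), in the
# top-left, top-right, bottom-right, bottom-left order that both the
# ArUco detector and SOLVEPNP_IPPE_SQUARE expect.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

def marker_poses(frame, K, dist):
    """Detect square markers and return {marker id: (rvec, tvec)}."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary,
                                       cv2.aruco.DetectorParameters())
    corners, ids, _ = detector.detectMarkers(frame)
    poses = {}
    if ids is not None:
        for marker_id, quad in zip(ids.flatten(), corners):
            ok, rvec, tvec = cv2.solvePnP(
                OBJECT_POINTS, quad.reshape(-1, 2).astype(np.float32),
                K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
            if ok:
                poses[int(marker_id)] = (rvec, tvec)
    return poses
```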
Abstract. There exist many infrared outside-in 6-DOF pose tracking configurations with cameras mounted rigidly to the environment. In such a setup, tracking is inherently impossible for IR targets inside/below/behind other opaque objects (the occlusion problem). We present a solution that integrates an additional, mobile IR tracking system to overcome this problem. The solution consists of an indirect tracking setup in which the stationary cameras track the mobile cameras, which in turn track the target. Accuracy problems that are inherent to such an indirect tracking setup are tackled by an error correction mechanism based on reference points in the scene that are known to both tracking systems. An evaluation demonstrates that, in naive indirect tracking without error correction, the major source of error is an incorrect estimate of the mobile system's orientation, and that this source of error can be practically eliminated by our correction mechanism.

Keywords: Augmented Reality, indirect tracking, sensor fusion, absolute orientation problem.

Motivation

One of the currently most common tracking setups for AR and VR applications is an outside-in configuration with a number of infrared cameras mounted rigidly to the environment, observing a fixed volume in their midst. The camera arrangement restricts the tracking of movable objects inside/below/behind other opaque objects in the scene. We call this the occlusion problem. It cannot generally be solved by adding cameras to the classical outside-in setup: first, occlusions generated by scene objects cannot always be known in advance; second, the scene may offer only small and varying viewing angles to the outside, which cannot be covered simply by adding more cameras. This is especially true for trackable objects surrounded by other objects, e.g. a tool inside a car body.

Our indirect tracking approach adds an additional, mobile IR tracking system which can be placed in the scene on the fly such that it can see trackable objects that are hidden from the stationary cameras. The mobile setup itself is equipped with a marker so that its pose can be tracked by the stationary setup (see Figure 1(a)). Figure 1(b) shows how the proposed solution would look in AR stud welding, one of our industrial scenarios, in which the stud-welding gun has to be tracked inside the car body so that navigational information about the next welding position can be shown on a display attached to the welding gun. This application is already in productive use, but until now it has suffered from the restrictions of classical outside-in tracking described above.
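The geometry of the indirect setup can be written down compactly: the target pose in the stationary (world) frame is the composition of the tracked world-to-mobile and mobile-to-target transforms, and the world-to-mobile transform can be re-estimated from reference points measured in both systems by solving the absolute orientation problem. The sketch below is our own illustration of this idea with homogeneous 4x4 transforms, using the standard Kabsch/SVD solution; it is not the authors' code, and the paper's specific correction mechanism may differ in detail.

```python
# Indirect tracking chain and an SVD-based correction (illustrative only).
import numpy as np

def indirect_pose(T_world_mobile, T_mobile_target):
    """Naive indirect tracking: chain the two tracked 4x4 poses."""
    return T_world_mobile @ T_mobile_target

def corrected_world_mobile(ref_world, ref_mobile):
    """Re-estimate the world-from-mobile transform from N >= 3 reference
    points given in both frames (corresponding rows of the two (N, 3)
    arrays), via the SVD solution to the absolute orientation problem."""
    cw, cm = ref_world.mean(axis=0), ref_mobile.mean(axis=0)
    H = (ref_mobile - cm).T @ (ref_world - cw)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cw - R @ cm
    return T
```

Replacing the directly tracked T_world_mobile with the re-estimated transform before chaining is what would remove the dominant orientation error that the evaluation identifies.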
In the Shader Lamps concept, a projector-camera system augments physical objects with projected virtual textures, provided that a precise intrinsic and extrinsic calibration of the system is available. Calibrating such systems has in the past been an elaborate and lengthy task requiring a special calibration apparatus. Self-calibration methods, in turn, can estimate the calibration parameters automatically with no manual effort; however, they inherently lack global scale and are fairly sensitive to the input data. We propose a new semi-automatic calibration approach for projector-camera systems that, unlike existing auto-calibration approaches, additionally recovers the necessary global scale by projecting onto an arbitrary object of known geometry. To this end, our method combines surface registration with bundle adjustment optimization on points reconstructed from structured light projections to refine a solution that is computed from the decomposition of the fundamental matrix. In simulations on virtual data and in experiments with real data, we demonstrate that our approach estimates the global scale robustly and is furthermore able to improve incorrectly guessed intrinsic and extrinsic calibration parameters, thus outperforming comparable metric rectification algorithms.
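To illustrate the scale-recovery step: a relative pose decomposed from the fundamental matrix is defined only up to scale, so points triangulated from the structured-light correspondences form a scaled copy of the known object, and registering that reconstruction against the known geometry with a similarity transform exposes the missing global factor. The sketch below uses the closed-form Umeyama similarity registration as a stand-in for this step; it is a simplified illustration and omits the surface registration and bundle adjustment refinement that the paper combines it with.

```python
# Global scale recovery by similarity registration (Umeyama, closed form).
# 'reconstructed' holds triangulated, up-to-scale points; 'model' holds the
# corresponding points on the known object geometry; both are (N, 3) arrays.
import numpy as np

def recover_global_scale(reconstructed, model):
    """Estimate s, R, t such that model ~= s * R @ reconstructed + t."""
    cr, cm = reconstructed.mean(axis=0), model.mean(axis=0)
    X, Y = reconstructed - cr, model - cm          # centered point sets
    Sigma = X.T @ Y / len(X)                       # cross-covariance
    U, S, Vt = np.linalg.svd(Sigma)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / X.var(axis=0).sum()  # the global scale
    t = cm - s * R @ cr
    return s, R, t
```

The returned s is the global scale that a fundamental-matrix decomposition alone cannot provide; in a full pipeline it would be refined jointly with the other parameters by bundle adjustment.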