Mobile devices are a common target for augmented reality applications, especially for showing contextual information in buildings or on construction sites. A prerequisite for displaying contextual information is localizing objects and the device itself in the real world. In this paper, we present our approach to mobile indoor localization given a building model. The approach uses no external sensors or input: accurate external sensors such as stationary cameras can be expensive and difficult to set up and maintain, and relying on existing external sources is also problematic, since Internet connections can be unreliable and GPS signals inaccurate inside buildings. We therefore aim for a localization solution that lets an augmented reality device accurately localize itself using only data from internal sensors and preexisting information about the building. Given an accurate model of the building's geometry, we can use modern spatial mapping techniques and point-cloud matching to find a mapping between local device coordinates and global model coordinates. We use normal analysis and 2D template matching on an inverse distance map to determine this mapping. The proposed algorithm is designed for speed and efficiency, as mobile devices are constrained by hardware limitations. We show an implementation of the algorithm on the Microsoft HoloLens, test its localization accuracy, and offer use cases for the technology.
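To illustrate one way the matching step could work, the following is a minimal sketch that uses OpenCV to build an inverse distance map from a 2D occupancy grid of the building model and to search for the device's local scan by rotated template matching. The map construction, the scoring function, and the rotation search granularity are illustrative assumptions, not the exact algorithm described above; all input names and parameter values are hypothetical.

```python
# Sketch only: inverse distance map + rotated 2D template matching.
# Assumes the building model and the device's spatial map have already been
# projected to 2D occupancy grids (e.g., via normal analysis to find walls).
import cv2
import numpy as np

def inverse_distance_map(occupancy: np.ndarray) -> np.ndarray:
    """Occupancy grid (walls = 1) -> map that peaks at walls and decays with distance."""
    free = (occupancy == 0).astype(np.uint8)
    dist = cv2.distanceTransform(free, cv2.DIST_L2, 5)  # distance to nearest wall
    return (1.0 / (1.0 + dist)).astype(np.float32)      # assumed inversion scheme

def locate(global_map: np.ndarray, local_scan: np.ndarray, angles=range(0, 360, 2)):
    """Search rotation + translation of the device's local scan in the global map.

    local_scan must be smaller than global_map for matchTemplate to apply.
    """
    idm = inverse_distance_map(global_map)
    best = (-np.inf, None, None)  # (score, (x, y), angle)
    h, w = local_scan.shape
    for angle in angles:
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        template = cv2.warpAffine(local_scan.astype(np.float32), rot, (w, h))
        scores = cv2.matchTemplate(idm, template, cv2.TM_CCORR_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(scores)
        if max_val > best[0]:
            best = (max_val, max_loc, angle)
    return best  # translation and rotation mapping local to global coordinates
```

A coarse-to-fine angle search or downsampled maps would be natural optimizations under the hardware constraints mentioned above, but are omitted here for brevity.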
Vision-based tracking systems enable optimizing productivity and safety management on construction sites by monitoring workers' movements. However, training and evaluating such a system requires a vast amount of data, and sufficient datasets rarely exist for this purpose. We investigate the use of synthetic data to overcome this issue. Using 3D computer graphics software, we model virtual construction site scenarios. These are rendered to produce a synthetic dataset that augments a self-recorded real-world dataset. We verify our approach by means of a tracking system: we train a YOLOv3 detector to identify pedestrian workers and apply Kalman filtering to the detections to track them over consecutive video frames. First, we examine the detector's performance when training on synthetic data of various environmental conditions. Second, we compare the evaluation results of our tracking system on real-world and synthetic scenarios. With an increase of about 7.5 percentage points in mean average precision, our findings show that a synthetic extension is beneficial for otherwise small datasets. The similarity of synthetic and real-world results allows the conclusion that 3D scenes are an alternative for evaluating vision-based tracking systems on hazardous scenes without exposing workers to risks.
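As an illustration of the tracking step, the sketch below implements a constant-velocity Kalman filter over detected bounding-box centers in pure NumPy. The state layout, noise covariances, and time step are assumptions made for the sketch, not the paper's configuration, and the association of detections to tracks is left out.

```python
# Sketch only: constant-velocity Kalman filter for one tracked worker.
# State is [cx, cy, vx, vy]; we observe only the box center from the detector.
import numpy as np

class BoxTracker:
    def __init__(self, cx, cy, dt=1.0):
        self.x = np.array([cx, cy, 0.0, 0.0])                  # initial state
        self.P = np.eye(4) * 10.0                              # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt   # motion model
        self.H = np.eye(2, 4)                                  # position-only observation
        self.Q = np.eye(4) * 0.01                              # process noise (assumed)
        self.R = np.eye(2) * 1.0                               # measurement noise (assumed)

    def predict(self):
        """Propagate the state one frame ahead; returns the predicted center."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a measured center (cx, cy) from the detector."""
        y = np.asarray(z) - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

In a full tracker, each frame would call `predict()` for every active track, associate new detections to the predicted positions (for example, by nearest neighbor or IoU), and call `update()` with each matched detection.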
An open challenge on the road to unraveling the brain's multilevel organization is establishing techniques to research connectivity and dynamics at different scales in time and space, as well as the links between them. This work focuses on the design of a framework, ConGen, that facilitates the generation of multiscale connectivity in large neural networks using a symbolic visual language capable of representing the model at different structural levels. This symbolic language allows researchers to create and visually analyze the generated networks independently of the simulator to be used, since the visual model is translated into a simulator-independent language. The simplicity of the front-end visual representation, together with the simulator independence provided by the back-end translation, combines into a framework that enhances collaboration among scientists with expertise at different scales of abstraction and from different fields. On the basis of two use cases, we introduce the features and possibilities of our proposed visual language and the associated workflow. We demonstrate that ConGen enables the creation, editing, and visualization of multiscale biological neural networks and provides a complete workflow for producing simulation scripts from the visual representation of the model.
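The abstract does not name the simulator-independent language, so as a purely illustrative stand-in the sketch below writes a two-population network using PyNN, an existing simulator-independent Python API; a script generated from a visual ConGen model could take a broadly similar shape. All population sizes, cell models, and parameters here are hypothetical.

```python
# Sketch only: a simulator-independent network description in PyNN.
# Swapping the import (e.g., pyNN.neuron) retargets the same model to
# another simulator without changing the network definition.
import pyNN.nest as sim

sim.setup(timestep=0.1)

# Two populations, as a visual editor might represent them as nodes.
excitatory = sim.Population(80, sim.IF_cond_exp(), label="exc")
inhibitory = sim.Population(20, sim.IF_cond_exp(), label="inh")

# Probabilistic connectivity between populations, akin to edges in the
# symbolic representation.
conn = sim.FixedProbabilityConnector(p_connect=0.1)
sim.Projection(excitatory, inhibitory, conn,
               synapse_type=sim.StaticSynapse(weight=0.01, delay=1.0),
               receptor_type="excitatory")
sim.Projection(inhibitory, excitatory, conn,
               synapse_type=sim.StaticSynapse(weight=0.02, delay=1.0),
               receptor_type="inhibitory")

sim.run(1000.0)  # simulate one second of biological time
sim.end()
```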