Indoor modeling has recently gained increased attention, driven by the growing need for efficient indoor location-based services. Indoor environments differ from outdoor spaces in two key aspects: they are smaller, and they contain many structural objects such as walls, doors, and furniture. To model indoor environments properly, novel data acquisition concepts and data modeling algorithms have been devised to meet the requirements of indoor spatial applications. Several research efforts have been directed at this problem; nevertheless, most of them suffer either from relying on impractical data acquisition methods or from being limited to 2D modeling.
To overcome these limitations, we introduce MapSense, an approach that automatically derives indoor models from 3D point clouds collected by individuals using mobile devices and platforms such as Google Tango, Apple ARKit, and Microsoft HoloLens. To this end, MapSense leverages several computer vision and machine learning algorithms to precisely infer the structural objects. MapSense focuses primarily on improving modeling accuracy by adopting formal grammars that encode design-time knowledge, i.e., structural information about the building. Beyond modeling accuracy, MapSense also addresses the energy overhead on mobile devices by developing a probabilistic quality model through which the devices upload only high-quality
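To give a flavor of how a formal grammar can encode design-time knowledge, the sketch below checks whether a sequence of structural labels inferred from a point cloud is derivable under a small context-free grammar. All symbols, production rules, and function names here are illustrative assumptions for exposition, not the grammar actually used by MapSense:

```python
# Hypothetical structural grammar: a Room is three plain walls plus one
# wall containing a door; a plain Wall is a single wall segment.
GRAMMAR = {
    "Room": (("Wall", "Wall", "Wall", "WallWithDoor"),),
    "WallWithDoor": (("WallSeg", "Door", "WallSeg"),),
    "Wall": (("WallSeg",),),
}

def derives(symbol, seq):
    """True if `symbol` can derive exactly the tuple of terminals `seq`."""
    if symbol not in GRAMMAR:              # terminal symbol
        return seq == (symbol,)
    return any(matches(rhs, seq) for rhs in GRAMMAR[symbol])

def matches(rhs, seq):
    """True if the RHS symbols can jointly derive `seq`, trying every
    split point for the first symbol (brute force; fine for short walls)."""
    if not rhs:
        return not seq
    head, rest = rhs[0], rhs[1:]
    return any(derives(head, seq[:i]) and matches(rest, seq[i:])
               for i in range(len(seq) + 1))

# A labeled surface sequence inferred from a scanned room:
observed = ("WallSeg", "WallSeg", "WallSeg", "WallSeg", "Door", "WallSeg")
print(derives("Room", observed))  # True: consistent with the room grammar
```

A labeling that violates the grammar (e.g., a door with no surrounding wall segments) would fail the check, which is how such design-time constraints can prune implausible interpretations of noisy point-cloud data.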
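The upload-gating idea can likewise be sketched as a probabilistic quality score thresholded on the device, so that the radio is used only for scans likely to help the server. The features, weights, and threshold below are illustrative assumptions, not the quality model developed in MapSense:

```python
import math

# Hypothetical per-scan quality features in [0, 1] and logistic weights.
WEIGHTS = {"point_density": 2.1, "coverage": 3.4, "motion_blur": -2.8}
BIAS = -1.5
UPLOAD_THRESHOLD = 0.8

def quality_probability(features: dict) -> float:
    """Logistic model mapping scan features to P(scan is high quality)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def should_upload(features: dict) -> bool:
    """Gate the radio: transmit the point cloud only when it is likely
    to be useful, avoiding the energy cost of low-quality uploads."""
    return quality_probability(features) >= UPLOAD_THRESHOLD

scan = {"point_density": 0.9, "coverage": 0.7, "motion_blur": 0.1}
print(should_upload(scan))  # True: P(high quality) ~ 0.92
```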
point clouds to the crowd-sensing servers. To demonstrate the performance of MapSense, we implemented a crowd-sensing Android app with which six volunteers collected 3D point clouds in two different buildings. The results showed that MapSense accurately infers the various structural objects while drastically reducing the energy overhead on the mobile devices.