Abstract

3D reconstruction and visualization of environments is increasingly important, and 3D models are required in a wide range of application areas. Reconstructing 3D models has therefore been a major research focus in academia and industry. For example, large-scale efforts to reconstruct city models at a global scale are currently underway. A major limitation of those efforts is that creating realistic 3D models of environments is a tedious and time-consuming task. In particular, two major issues prevent a broader adoption of 3D modeling techniques: the lack of affordable 3D scanning devices that enable easy acquisition of 3D data, and the lack of algorithms capable of automatically processing these data into 3D models. We believe that autonomous technologies capable of generating textured 3D models of real environments will make the modeling process affordable and enable a wide variety of new applications.

This thesis addresses the problem of automatic 3D reconstruction, and we present a system for the unsupervised reconstruction of textured 3D models of indoor environments. The contributions are solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models. We first present a robotic data acquisition system that allows us to scan large environments automatically in a short amount of time. We also propose a calibration procedure for this system that determines the internal and external calibration necessary to transform data from one sensor into the coordinate system of another. Next, we present solutions to the multi-view data registration problem, i.e., the problem of aligning the data of multiple 3D scans in a common coordinate system. We propose a novel non-rigid registration method based on a probabilistic SLAM framework.
This method incorporates spatial correlation models as map priors to guide the optimization. Scans are aligned by optimizing the robot pose estimates as well as the scan points themselves. We show that this non-rigid registration significantly improves the alignment. Next, we address the problem of reconstructing a consistent 3D surface representation from the registered point clouds. We propose a volumetric surface reconstruction method based on a Poisson framework. In a second step, we improve the accuracy of this reconstruction by optimizing the mesh vertices to better approximate the true surface. We demonstrate that this method is well suited to the reconstruction of indoor environments. Finally, we present a solution to the reconstruction of texture maps from multiple scans. Our texture reconstruction approach partitions the surface into segments, unfolds each segment onto a plane, and reconstructs a texture map by blending multiple views into a single composite. This technique yields a very realistic reconstruction of the surface appearance and greatly enhances the visual impression of the models.
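The blending step described above can be illustrated with a minimal sketch. Assuming each view contributes a color sample of the same surface point together with a non-negative weight (e.g., derived from viewing angle and distance), the composite texel is the normalized weighted average. The function name and weighting scheme here are invented for illustration and are not the actual formulation used in the thesis.

```python
import numpy as np

def blend_views(colors, weights):
    """Blend per-view color samples of one texel into a composite color.

    colors  : (n_views, 3) array of RGB samples of the same surface point
    weights : (n_views,) array of non-negative per-view weights
    """
    colors = np.asarray(colors, dtype=float)
    weights = np.asarray(weights, dtype=float)
    total = weights.sum()
    if total == 0:
        raise ValueError("at least one view must have positive weight")
    # Normalized weighted average: views with higher weight dominate.
    return (weights[:, None] * colors).sum(axis=0) / total

# Example: two views of the same texel, the frontal view weighted higher.
composite = blend_views([[200, 100, 50], [100, 100, 100]], [0.8, 0.2])
# composite is [180.0, 100.0, 60.0]
```

In practice such weights would vary per texel across the unfolded segment, so seams between views fade out smoothly rather than appearing as hard boundaries.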