Figure 1: High-quality all-hex meshes generated by our method. Comparisons with CubeCover [Nieser et al. 2011] and volumetric PolyCube [Gregson et al. 2011] demonstrate that the hex meshes produced by our method are superior in mesh quality (the minimum scaled Jacobian of the hexes is shown; larger is better) and in singularity placement (see the zoomed-in views). Panels: (a) our method, J_ave = 0.936, J_min = 0.609; (b) CubeCover, J_ave = 0.902, J_min = 0.073; (c) our method, J_ave = 0.947, J_min = 0.658; (d) volumetric PolyCube, J_ave = 0.950, J_min = 0.131.
Abstract: Decomposing a volume into high-quality hexahedral cells is a challenging task in geometric modeling and computational geometry. Inspired by the use of cross fields in quad meshing and by the CubeCover approach in hex meshing, we present a complete all-hex meshing framework based on a singularity-restricted field, which is essential for inducing a valid all-hex structure. Given a volume represented by a tetrahedral mesh, we first compute a boundary-aligned 3D frame field inside it and then convert the frame field into a singularity-restricted field via effective topological operations. Our framework applies the CubeCover method to obtain the volume parametrization. To reduce the degenerate elements that appear in the volume parametrization, we also propose novel tetrahedral split operations that preprocess the singularity-restricted frame field. Experimental results show that our algorithm robustly and efficiently generates high-quality all-hex meshes from a variety of 3D volumes.
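The quality numbers reported in Figure 1 use the scaled Jacobian, a standard hexahedral quality measure: at each of a hex's eight corners, the three incident edge vectors are normalized and the determinant of the resulting 3x3 matrix is taken; the element's scaled Jacobian is the minimum over its corners, ranging from -1 (inverted) to 1 (a perfect cube corner). The sketch below is a minimal illustration of this standard metric, not code from the paper; it assumes the usual VTK-style hexahedron vertex ordering for the corner-to-edge indexing.

```python
import numpy as np

# Edge neighbors of each corner, assuming VTK-style hexahedron ordering
# (vertices 0-3 form the bottom face, 4-7 the top face, both counterclockwise).
HEX_CORNER_NEIGHBORS = [
    (1, 3, 4), (2, 0, 5), (3, 1, 6), (0, 2, 7),
    (7, 5, 0), (4, 6, 1), (5, 7, 2), (6, 4, 3),
]

def scaled_jacobian(hex_pts):
    """Minimum scaled Jacobian of one hexahedron.

    hex_pts: (8, 3) array of corner coordinates.
    Returns a value in [-1, 1]; 1 means every corner is a perfect cube corner,
    values <= 0 indicate a degenerate or inverted element.
    """
    hex_pts = np.asarray(hex_pts, dtype=float)
    worst = 1.0
    for c, (i, j, k) in enumerate(HEX_CORNER_NEIGHBORS):
        # Three edge vectors emanating from corner c, normalized row-wise.
        e = np.stack([hex_pts[i] - hex_pts[c],
                      hex_pts[j] - hex_pts[c],
                      hex_pts[k] - hex_pts[c]])
        norms = np.linalg.norm(e, axis=1)
        if np.any(norms == 0.0):
            return 0.0  # collapsed edge -> degenerate corner
        worst = min(worst, np.linalg.det(e / norms[:, None]))
    return worst

def mesh_quality(points, hexes):
    """J_min and J_ave over an all-hex mesh given as (N,3) points and (M,8) indices."""
    vals = np.array([scaled_jacobian(points[h]) for h in hexes])
    return vals.min(), vals.mean()

# A unit cube scores 1.0 at every corner.
cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
print(scaled_jacobian(cube))  # 1.0
```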
We present an interactive approach to semantic modeling of indoor scenes with a consumer-level RGBD camera. Using our approach, the user first takes an RGBD image of an indoor scene, which is automatically segmented into a set of regions with semantic labels. If the segmentation is not satisfactory, the user can draw strokes to guide the algorithm toward a better result. Once segmentation is finished, the depth data of each semantic region is used to retrieve a matching 3D model from a database. Each model is then transformed according to the image depth to form the scene. For large scenes that a single image cannot fully cover, the user can take multiple images to construct the remaining parts; the 3D models built from all images are then transformed and unified into a complete scene. We demonstrate the efficiency and robustness of our approach by modeling several real-world scenes.
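The per-region transform step can be made concrete with a small sketch: the labeled depth pixels of one segmented region are back-projected through pinhole intrinsics into a 3D point cloud, and a translation plus uniform scale is derived to drop a retrieved model's bounding box onto that region. This is a simplified illustration of the idea rather than the paper's actual fitting procedure; the intrinsics (fx, fy, cx, cy) and the centroid/extent-based alignment are assumptions made for the example.

```python
import numpy as np

def backproject_region(depth, mask, fx, fy, cx, cy):
    """Back-project the masked depth pixels of one segmented region into camera space.

    depth: (H, W) depth map in meters; mask: (H, W) boolean region mask.
    Returns an (N, 3) point cloud.
    """
    v, u = np.nonzero(mask & (depth > 0))
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)

def place_model(model_vertices, region_points):
    """Derive a uniform scale and translation that drops a retrieved model onto a
    region's point cloud by matching bounding-box extent and centroid.
    (A crude stand-in for the transform step; assumes the model is already
    oriented consistently with the camera frame.)
    """
    m_min, m_max = model_vertices.min(0), model_vertices.max(0)
    r_min, r_max = region_points.min(0), region_points.max(0)
    scale = np.linalg.norm(r_max - r_min) / max(np.linalg.norm(m_max - m_min), 1e-9)
    scaled = model_vertices * scale
    translation = region_points.mean(0) - scaled.mean(0)
    return scaled + translation, scale, translation

# Example with synthetic data (hypothetical intrinsics and region mask).
H, W = 480, 640
depth = np.full((H, W), 2.0)              # flat surface 2 m from the camera
mask = np.zeros((H, W), dtype=bool)
mask[200:280, 300:400] = True             # one "semantic region"
pts = backproject_region(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
model = np.random.rand(1000, 3)           # stand-in for a retrieved database model
placed, s, t = place_model(model, pts)
print(pts.shape, s, t)
```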
Figure 1: 3D Dense Reconstruction and Rendering from Different SLAM Systems. On the Replica dataset [49], we compare to the dense RGB-D SLAM method NICE-SLAM [76] and to the monocular SLAM approaches COLMAP [46], DROID-SLAM [57], and our proposed NICER-SLAM; ground-truth renderings are shown for reference.