We consider the problem of finding meaningful correspondences between 3D models that are related but not necessarily very similar. When the shapes are quite different, a point-to-point map is not always appropriate, so our focus in this paper is a method to build a set of correspondences between shape regions or parts. The proposed approach exploits a variety of feature functions on the shapes and relies on the key observation that points in matching parts have similar ranks in the sorted order of the corresponding feature values. Our algorithm proceeds in two steps. We first build an affinity matrix between points on the two shapes, based on feature rank similarity over many feature functions. We then define a notion of stability of a pair of regions with respect to this affinity matrix, obtained as a fixed point of a nonlinear operator. Our method yields a family of corresponding maximally stable regions between the two shapes that can be used to define shape parts. We observe that this is an instance of the biclustering problem and that it is related to solving a constrained maximal eigenvalue problem. We provide an algorithm to solve this problem that mimics the power method, and we show the robustness of its output to noisy input features as well as its convergence properties. The obtained part correspondences are shown to be nearly perfect matches in the isometric case, and semantically appropriate even in non-isometric cases. We provide numerous examples and applications of this technique, for example to sharpening the correspondences produced by traditional shape-matching algorithms.
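The core computation can be illustrated with a minimal sketch, assuming per-point feature values are available on both shapes: an affinity matrix built from normalized feature ranks, followed by a power-method-style iteration that reinforces a pair of weight vectors through that matrix. All names here (`rank_affinity`, `stable_region_pair`, `sigma`) are illustrative assumptions; the paper's operator is nonlinear and constrained, which this unconstrained sketch does not reproduce.

```python
# Hypothetical sketch, not the authors' code: rank-based affinity between two
# point sets and a power-method-like iteration suggesting one stable region pair.
import numpy as np

def rank_affinity(features_a, features_b, sigma=0.1):
    """Affinity W[i, j] from similarity of normalized feature ranks.

    features_a: (n_a, k) values of k feature functions at points of shape A
    features_b: (n_b, k) values of the same feature functions on shape B
    """
    # Normalized ranks in [0, 1], computed independently per feature function.
    ranks_a = np.argsort(np.argsort(features_a, axis=0), axis=0) / (len(features_a) - 1)
    ranks_b = np.argsort(np.argsort(features_b, axis=0), axis=0) / (len(features_b) - 1)
    # Average, over feature functions, of a Gaussian of the rank difference.
    diff = ranks_a[:, None, :] - ranks_b[None, :, :]          # shape (n_a, n_b, k)
    return np.exp(-(diff ** 2) / (2 * sigma ** 2)).mean(axis=2)

def stable_region_pair(W, iters=100):
    """Power-method-like iteration: weight vectors u (on A) and v (on B)
    that mutually reinforce each other through the affinity matrix W."""
    u = np.full(W.shape[0], 1.0 / np.sqrt(W.shape[0]))
    v = np.full(W.shape[1], 1.0 / np.sqrt(W.shape[1]))
    for _ in range(iters):
        u = W @ v
        u /= np.linalg.norm(u)
        v = W.T @ u
        v /= np.linalg.norm(v)
    return u, v  # large entries indicate points likely to lie in corresponding regions
```

In this simplified form the iteration converges to the leading singular vectors of W, which is the connection to the maximal eigenvalue view of biclustering mentioned above.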
Reconstruction of example point clouds from the McGill dataset [SZM*08]. First row: input point cloud. Second row: our reconstruction of the shape in the first row. Third row: Poisson surface reconstruction [KBH06] of the shape in the first row. The shapes shown are (left to right) two chairs, a cup, an octopus, a snake, a dolphin, a teddy, and a table.
Real-life man-made objects often exhibit strong and easily identifiable structure, as a direct result of their design or their intended functionality. Structure typically appears in the form of individual parts and their arrangement. Knowledge of object structure can be an important cue for object recognition and scene understanding, a key goal for various AR and robotics applications. However, the commodity RGB-D sensors used in these scenarios only produce raw, unorganized point clouds, without structural information about the captured scene. Moreover, the generated data is commonly partial and susceptible to artifacts and noise, which makes inferring the structure of scanned objects challenging. In this paper, we organize large shape collections into parameterized shape templates to capture the underlying structure of the objects. The templates allow us to transfer the structural information onto new objects and incomplete scans. We employ a deep neural network that matches the partial scan with one of the shape templates, then matches and fits it to complete and detailed models from the collection. This allows us to faithfully label the parts of the scanned object and to guide its reconstruction. We showcase the effectiveness of our method by comparing it to other state-of-the-art approaches.
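A rough illustration of the fitting and label-transfer stage is sketched below, assuming a template is represented by named part centers; the network-based template selection step is omitted. All names (`fit_template`, `template_boxes`) are hypothetical and not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: crudely fit a part-based template
# to a partial scan by moving each part center toward the scan points assigned to it,
# then read off a part label for every scan point from the final assignment.
import numpy as np

def fit_template(template_boxes, scan_points, iters=50, step=0.1):
    """template_boxes: {part_name: initial 3D center}; scan_points: (N, 3) array."""
    boxes = {name: np.array(center, dtype=float) for name, center in template_boxes.items()}
    names = list(boxes)
    for _ in range(iters):
        centers = np.stack([boxes[n] for n in names])                          # (P, 3)
        # Assign each scan point to its nearest part center.
        assign = np.argmin(np.linalg.norm(scan_points[:, None] - centers[None], axis=2), axis=1)
        for p, name in enumerate(names):
            pts = scan_points[assign == p]
            if len(pts):
                boxes[name] += step * (pts.mean(axis=0) - boxes[name])          # move toward assigned points
    labels = [names[i] for i in assign]  # transferred part label per scan point
    return boxes, labels

# Toy usage: a two-part "chair" template fit to random points standing in for a scan.
template = {"seat": [0.0, 0.5, 0.0], "back": [0.0, 1.0, -0.4]}
scan = np.random.rand(200, 3)
fitted, labels = fit_template(template, scan)
print({k: v.round(2) for k, v in fitted.items()})
```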