3D shape retrieval has long been an active research topic in computer vision; its goal is fast, efficient retrieval of 3D shapes that meet user needs. With the rapid development and popularization of touch-screen devices, hand-drawn sketches have become arguably the most convenient and user-friendly form of input. However, the large gap between 3D shapes and 2D sketches is the main challenge limiting retrieval performance. In this paper, building on multi-view feature extraction for 3D shapes, we propose adding a sketch-view feature-similarity comparison module during training that scores the views used to form the final feature descriptors. Specifically, we render each 3D shape into 2D views from multiple perspectives to represent the shape, extract features from the two input types with two different networks, and design a similarity-weighting module that scores each view to produce the final descriptor. Finally, a descriptor similarity metric network is trained with a contrastive loss. Experimental results on the SHREC’13 dataset demonstrate the superiority and robustness of our method.
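The abstract describes two core components: a similarity-weighting module that scores each rendered view against the sketch feature to form a final descriptor, and a metric network trained with a contrastive loss. The following is a minimal NumPy sketch of those two ideas; the function names, the use of cosine similarity, and the softmax view weighting are illustrative assumptions, since the abstract does not specify the exact scoring or aggregation functions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize feature vectors to unit length so dot products
    # become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def weighted_view_descriptor(sketch_feat, view_feats):
    """Hypothetical similarity-weighting module: score each rendered
    view by cosine similarity to the sketch feature, softmax the
    scores, and return the weighted sum of view features as the
    final shape descriptor."""
    s = l2_normalize(sketch_feat)            # (d,)
    v = l2_normalize(view_feats)             # (n_views, d)
    scores = v @ s                           # per-view similarity scores
    w = np.exp(scores - scores.max())        # numerically stable softmax
    weights = w / w.sum()
    return weights @ view_feats              # (d,) final descriptor

def contrastive_loss(f1, f2, same, margin=1.0):
    """Standard pairwise contrastive loss: pull matching sketch/shape
    descriptor pairs together, push non-matching pairs at least
    `margin` apart."""
    d = np.linalg.norm(f1 - f2)
    return d ** 2 if same else max(0.0, margin - d) ** 2
```

In this sketch, a view that aligns closely with the sketch feature receives a larger softmax weight and thus dominates the final descriptor, which matches the abstract's intent of letting per-view scores shape the descriptor.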