Accurate 3D object recognition and 6-DOF pose estimation are widely applied in areas such as unmanned warehouses, collaborative robots, and manufacturing. Extracting robust and representative features from point clouds is therefore a central issue. In this paper, an unsupervised feature learning network is introduced to extract 3D keypoint features directly from point clouds, rather than transforming the point clouds into voxel grids or projected RGB images, which saves computational time while preserving the geometric information of the object. Specifically, the proposed network is built around a stacked point feature encoder, which appends the local discriminative features of each point's neighborhood to the original point-wise features. The framework consists of an offline training phase and an online testing phase. In the offline training phase, the stacked point feature encoder is trained and then used to generate a feature database of keypoints sampled from synthetic point clouds of multiple model views. In the online testing phase, each feature extracted from the unknown testing scene is matched against the database using a K-D tree voting strategy. Final matching results are then obtained through a hypothesis-and-verification strategy. The proposed method is extensively evaluated on four public datasets, and the results show that it delivers comparable or superior performance to the state of the art in terms of F1-score, average 3D distance (ADD), and recognition rate.
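As an illustration of the online matching step, the sketch below shows how scene keypoint descriptors could be matched against a precomputed feature database with a K-D tree and accumulated as votes per model. The function names, the distance threshold, and the use of SciPy's cKDTree are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_scene_features(db_features, db_labels, scene_features, k=5, max_dist=0.5):
    """Match scene keypoint descriptors against a model feature database.

    db_features    : (N, D) array of model keypoint descriptors
    db_labels      : (N,) array of model IDs, one per database descriptor
    scene_features : (M, D) array of descriptors extracted from the test scene
    Returns a dict mapping model ID -> accumulated vote count.
    """
    tree = cKDTree(db_features)                  # build the K-D tree over the database once
    dists, idxs = tree.query(scene_features, k=k)
    votes = {}
    for nn_dists, nn_idxs in zip(dists, idxs):   # one row per scene keypoint
        for d, i in zip(nn_dists, nn_idxs):
            if d <= max_dist:                    # discard weak matches
                label = db_labels[i]
                votes[label] = votes.get(label, 0) + 1
    return votes
```

The vote counts would then feed the hypothesis-and-verification stage, where the highest-voted models are tested against the scene geometry.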
In this paper, we develop a novel 3D object recognition algorithm that performs detection and pose estimation jointly. We focus on the advantages of the 3D point cloud over the RGB-D image and aim to reduce the unpredictability of output values that inevitably occurs in regression tasks. To achieve this, we first adopt the Truncated Signed Distance Function (TSDF) to encode the point cloud and extract a compact, low-dimensional discriminative feature via an unsupervised deep learning network. This approach not only eliminates dense scale sampling for offline model training but also avoids the distortion caused by mapping the 3D shape onto a 2D plane and removes the dependence on color cues. Then, we train Hough forests to achieve multi-object detection and 6-DoF pose estimation simultaneously. In addition, we propose a robust multilevel verification strategy that effectively reduces the unpredictability of output values in the Hough regression module. Experiments on public datasets demonstrate that our approach provides results comparable to the state of the art.
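A minimal sketch of how a point cloud might be encoded into a TSDF volume before unsupervised feature learning is given below, assuming the cloud has been normalized to the unit cube and that per-point normals are available to sign the distances. The grid resolution and truncation value are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def encode_tsdf(points, normals, grid_size=32, truncation=0.05):
    """Encode a normalized point cloud as a truncated signed distance grid.

    points  : (N, 3) array scaled to the unit cube [0, 1]^3
    normals : (N, 3) outward-pointing surface normals, used to sign the distance
    Returns a (grid_size, grid_size, grid_size) TSDF volume with values in [-1, 1].
    """
    tree = cKDTree(points)
    # voxel centers of a regular grid over the unit cube
    ticks = (np.arange(grid_size) + 0.5) / grid_size
    xs, ys, zs = np.meshgrid(ticks, ticks, ticks, indexing="ij")
    centers = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)

    dists, idxs = tree.query(centers)            # distance to the closest surface point
    # sign: positive if the voxel lies in front of the surface along the normal
    to_voxel = centers - points[idxs]
    signs = np.sign(np.einsum("ij,ij->i", to_voxel, normals[idxs]))
    signs[signs == 0] = 1.0

    tsdf = np.clip(signs * dists / truncation, -1.0, 1.0)
    return tsdf.reshape(grid_size, grid_size, grid_size)
```

Such a volume gives a fixed-size, color-free input for the unsupervised network, which is the property the abstract attributes to the TSDF encoding.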
Comparative sentences in online reviews convey substantial information of concern to customers and are attracting increasing attention from manufacturers and service providers. Existing studies on Chinese comparative sentence identification rely on pattern matching and supervised machine learning, and their performance leaves room for improvement. Therefore, this study further filters candidate comparative sentences using unsupervised sentiment analysis to improve mining performance. First, we constructed a Chinese comparative pattern set, which was used to extract candidate comparative sentences from a corpus of online product reviews. Then, we scored the candidate sentences using a sentiment analysis technique. Experiments determined that the threshold on the mean positive and negative affective scores lies between 0 and 0.03. Experimental results on Chinese customer reviews show that the final F-score increased to 87.54%, and the improvement is significant at the 0.01 level, which demonstrates the effectiveness of the technique. The proposed unsupervised method is suitable for mining the changeable and large-scale body of online reviews. The study not only generalizes across different products and data sizes but also improves performance in identifying comparative sentences.
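The sketch below illustrates the two-stage idea of pattern-based candidate extraction followed by sentiment-based filtering. The comparative patterns, the lexicon-based scoring function, and the way the 0.03 threshold is applied are all simplifying assumptions rather than the paper's actual pattern set or scoring scheme.

```python
import re

# Illustrative comparative markers (placeholders, not the paper's full pattern set);
# words such as "比", "不如", and "没有...那么" commonly mark Chinese comparative sentences.
COMPARATIVE_PATTERNS = [r"比", r"不如", r"没有.+那么", r"相比", r"更"]

def extract_candidates(sentences):
    """Keep sentences that match at least one comparative pattern."""
    return [s for s in sentences
            if any(re.search(p, s) for p in COMPARATIVE_PATTERNS)]

def sentiment_score(sentence, pos_lexicon, neg_lexicon):
    """Toy lexicon-based score: (#positive hits - #negative hits) / sentence length."""
    pos = sum(sentence.count(w) for w in pos_lexicon)
    neg = sum(sentence.count(w) for w in neg_lexicon)
    return (pos - neg) / max(len(sentence), 1)

def filter_by_sentiment(candidates, pos_lexicon, neg_lexicon, threshold=0.03):
    """Keep candidates whose absolute sentiment score reaches the threshold,
    on the assumption that genuine comparative opinions carry detectable polarity."""
    return [s for s in candidates
            if abs(sentiment_score(s, pos_lexicon, neg_lexicon)) >= threshold]
```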