The normals of feature points, i.e., points lying on the intersections of multiple smooth surfaces, are ambiguous and ill-defined. This paper presents a unified definition of point cloud normals for feature and non-feature points, which allows feature points to possess multiple normals. This definition facilitates several subsequent operations, such as feature point extraction and point cloud filtering. We also develop a feature-preserving normal estimation method that outputs multiple normals per feature point. The core of the method is a pair consistency voting scheme: all neighbor point pairs vote for the local tangent plane, and each vote weighs the fitting residuals of the pair of points together with the consistency of their preliminary normals. Thus, pairs that belong to the same smooth patch and lie relatively far from the features dominate the voting. An adaptive strategy is designed to overcome sampling anisotropy. In addition, we introduce an error measure compatible with traditional normal estimators, and present the first benchmark for normal estimation, composed of 152 synthesized point clouds with various features and sampling densities, and 288 real scans with different noise levels. Comprehensive quantitative experiments show that our method generates faithful feature-preserving normals and outperforms previous cutting-edge normal estimation methods, including the latest deep-learning-based method.
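To make the voting idea concrete, below is a minimal sketch of one way pair-consistency voting for a local tangent plane could look. It is an illustrative assumption, not the paper's actual formulation: candidate planes are generated from neighbor pairs through the query point, the Gaussian weights and the parameters sigma_r and sigma_n are hypothetical, and preliminary normals are assumed to come from an earlier step such as plain PCA.

```python
import numpy as np

def vote_tangent_plane(neighbors, prelim_normals, sigma_r=0.05, sigma_n=0.3):
    """Hedged sketch of pair-consistency voting for one query point.

    neighbors: (k, 3) neighbor offsets from the query point.
    prelim_normals: (k, 3) unit preliminary normals (e.g., from plain PCA).
    Each neighbor pair spans a candidate plane through the query point; the
    candidate is scored by votes from all pairs, where a pair's vote is large
    only when both of its points lie close to the plane (small residuals) and
    their preliminary normals agree.
    """
    k = len(neighbors)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    best_n, best_score = None, -np.inf
    for a, b in pairs:
        n = np.cross(neighbors[a], neighbors[b])   # candidate plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:                           # degenerate pair, skip
            continue
        n /= norm
        res = neighbors @ n                        # point-to-plane residuals
        score = 0.0
        for i, j in pairs:
            fit = np.exp(-(res[i] ** 2 + res[j] ** 2) / sigma_r ** 2)
            cons = np.exp(-(1.0 - abs(prelim_normals[i] @ prelim_normals[j]))
                          / sigma_n)
            score += fit * cons                    # the pair's weighted vote
        if score > best_score:
            best_n, best_score = n, score
    return best_n
```

Under this scoring, pairs straddling a crease receive small weights (large residuals on one side, inconsistent preliminary normals), so each smooth patch around a feature point can win the vote separately, which is how a feature point could end up with multiple normals.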
Benefiting from its global rank constraint, the low-rank representation (LRR) method has proven to be an effective solution for subspace learning. However, the same global mechanism makes the LRR model ill-suited to large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity; for dynamic data, it must recompute an expensive rank minimization over the entire data set whenever new samples are added, which is prohibitively costly. Existing attempts at online LRR either take a stochastic approach or build the representation from a small sample set and treat new input as out-of-sample data. The former often requires multiple passes over the data to perform well and is therefore slow; the latter casts online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online LRR subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm consists of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally using the learned subspace structure, and the LRR matrix is likewise solved incrementally via an efficient online singular value decomposition algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further provide a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical subspace recovery and subspace clustering tasks show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
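The dynamic-updating stage hinges on an online SVD update. The abstract does not give the paper's exact procedure, so the following is a generic Brand-style incremental SVD sketch under stated assumptions: the current left factors and singular values are kept, a block of new columns is folded in, and the factorization is re-truncated to a fixed rank (right factors are omitted for brevity).

```python
import numpy as np

def incremental_svd(U, S, new_cols, rank):
    """Hedged sketch of a block incremental SVD update (Brand-style).

    U: (d, r) current left singular vectors; S: (r,) singular values;
    new_cols: (d, c) newly arrived data columns; rank: truncation rank.
    Returns updated (U, S) for the column-appended matrix [A, new_cols].
    """
    proj = U.T @ new_cols                  # coordinates in current subspace
    resid = new_cols - U @ proj            # component outside the subspace
    Q, R = np.linalg.qr(resid)             # orthonormal basis for residual
    r, c = len(S), new_cols.shape[1]
    # Small (r + c) x (r + c) core matrix; re-diagonalizing it updates the
    # factorization without ever touching the full data matrix again.
    K = np.block([[np.diag(S), proj],
                  [np.zeros((c, r)), R]])
    Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
    U_new = np.hstack([U, Q]) @ Uk
    return U_new[:, :rank], Sk[:rank]

# Illustrative use: fold three new samples into a rank-5 factorization.
rng = np.random.default_rng(0)
d, r = 100, 5
U0, _ = np.linalg.qr(rng.standard_normal((d, r)))
S0 = np.linspace(10.0, 1.0, r)
U1, S1 = incremental_svd(U0, S0, rng.standard_normal((d, 3)), rank=r)
```

Because each update only decomposes a small (r + c) x (r + c) core, the per-step cost is independent of how many samples have already been seen, which is the property that lets an online LRR scheme avoid recomputing the batch rank minimization as data arrive.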