Abstract. We introduce a fast, robust, and accurate Hough Transform (HT) based algorithm for detecting spherical structures in 3D point clouds. To our knowledge, our algorithm is the first HT based implementation that detects spherical structures in typical 3D point clouds generated by consumer depth sensors such as the Microsoft Kinect. Our approach is designed to be computationally efficient, reducing an established limitation of HT based approaches. We provide experimental analysis of the results, showing robust performance against occlusion, and we show superior performance to the only other HT based algorithm for detecting spheres in point clouds available in the literature.
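To make the voting scheme concrete, the following is a minimal sketch of HT based sphere detection, not the paper's exact algorithm: it assumes per-point surface normals are available (derived analytically here for a synthetic test) and uses a hash-map accumulator over discretised centre/radius cells. The function name, radius range, and cell size are illustrative choices.

```python
import numpy as np

def hough_sphere_detect(points, normals, r_min=0.05, r_max=0.15,
                        r_steps=11, cell=0.01):
    """Each point votes along its estimated surface normal for candidate
    sphere centres at a set of discretised radii; the accumulator cell
    with the most votes yields the detected centre and radius."""
    radii = np.linspace(r_min, r_max, r_steps)
    votes = {}
    for p, n in zip(points, normals):
        for ir, r in enumerate(radii):
            for sign in (1.0, -1.0):          # normal orientation is ambiguous
                c = p + sign * r * n          # candidate sphere centre
                key = (*np.round(c / cell).astype(int), ir)
                votes[key] = votes.get(key, 0) + 1
    key, count = max(votes.items(), key=lambda kv: kv[1])
    return np.array(key[:3], dtype=float) * cell, radii[key[3]], count

# Synthetic check: noisy points on a sphere of radius 0.1 at the origin
# (for points on a sphere, the outward normals are the unit radial vectors).
rng = np.random.default_rng(0)
d = rng.normal(size=(2000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = 0.1 * d + rng.normal(scale=0.002, size=d.shape)
centre, radius, n_votes = hough_sphere_detect(pts, d)
```

Each point casts one vote per radius bin and normal direction, so the cost grows linearly with the number of points times the number of radius bins, which is what keeps HT voting of this kind computationally tractable.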
Abstract. We present a new approach to extracting moving spheres from a sequence of 3D point clouds. The new 3D velocity Hough Transform (3DVHT) incorporates motion parameters in addition to structural parameters in an evidence gathering process to accurately detect moving spheres in any given point cloud of the sequence. We demonstrate its capability to detect spheres that are obscured within the sequence of point clouds, which conventional approaches cannot achieve. We apply our algorithm to real and synthetic data and demonstrate its ability to detect fully occluded spheres by exploiting inter-frame correlation within the 3D point cloud sequence.
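A simplified sketch of the velocity-augmented voting idea follows. It assumes constant linear velocity and back-projects per-frame candidate centres (e.g. from the per-frame detector above) to a common reference time; the 3DVHT itself gathers evidence directly from the raw points, so this illustrates the inter-frame accumulation principle rather than the published algorithm. The velocity grid range and accumulator cell size are assumptions.

```python
import numpy as np
from itertools import product

def vht_track(frame_centres, times, cell=0.02):
    """Velocity-augmented Hough voting (sketch): each per-frame candidate
    centre votes, for every candidate velocity on a coarse grid, for the
    position the sphere would occupy at t = 0. Frames where the sphere is
    occluded simply contribute no votes; the winning (c0, v) cell still
    predicts its centre there via c(t) = c0 + v * t."""
    speeds = np.linspace(-0.5, 0.5, 11)          # m/s per axis (assumed range)
    v_grid = [np.array(v) for v in product(speeds, repeat=3)]
    votes = {}
    for c_t, t in zip(frame_centres, times):
        for iv, v in enumerate(v_grid):
            c0 = c_t - v * t                     # back-project to t = 0
            key = (*np.round(c0 / cell).astype(int), iv)
            votes[key] = votes.get(key, 0) + 1
    key = max(votes, key=votes.get)
    return np.array(key[:3], dtype=float) * cell, v_grid[key[3]]
```

Because the accumulator couples position and velocity, occluded frames contribute nothing but also cost nothing: the peak found from the visible frames extrapolates the centre into the frames where the sphere cannot be seen.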
Abstract. Much progress has been made recently in the development of 3D acquisition methods and technologies, which has increased the availability of low-cost 3D sensors such as the Microsoft Kinect. This enables a wide variety of computer vision applications that require object recognition and 3D shape retrieval. We present a novel algorithm for full 3D reconstruction of unknown moving objects in 2.5D point cloud sequences, such as those generated by 3D sensors. Our algorithm incorporates structural and temporal motion information to build 3D models of moving objects and is based on motion compensated temporal accumulation. Unlike other 3D reconstruction methods, the proposed algorithm does not require ICP refinement, keypoint detection, feature description, correspondence matching, provided object models, or any geometric information about the object. Given only a fixed centre or axis of rotation, the algorithm jointly estimates the best rigid transformation parameters for registration, applies surface resampling, reduces noise, and estimates the optimum angular velocity of the moving object.
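The core of motion compensated temporal accumulation can be sketched as follows, under assumptions not in the abstract: the rotation axis is taken as z, the angular velocity is constant, and accumulation quality is scored by counting occupied voxels. The function names and voxel size are illustrative, and the paper's actual resampling and noise-reduction steps are omitted.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the (assumed known) z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def accumulate_model(frames, times, omegas, cell=0.005):
    """Motion-compensated temporal accumulation (sketch): for each candidate
    angular velocity, de-rotate every frame back to t = 0 and merge. The
    correct velocity makes all views of the object overlap, so the merged
    cloud occupies the fewest voxels; that merged cloud is the 3D model."""
    best = (np.inf, None, None)
    for w in omegas:
        merged = np.vstack([pts @ rot_z(-w * t).T
                            for pts, t in zip(frames, times)])
        occ = len({tuple(v) for v in np.round(merged / cell).astype(int)})
        if occ < best[0]:
            best = (occ, w, merged)
    return best[1], best[2]   # estimated omega and reconstructed model points
```

De-rotating with the correct angular velocity superimposes all views of the rigid object, so velocity estimation and registration become the same search, which is why no ICP refinement or correspondence matching is required.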
Abstract. In this paper we introduce an algorithm for 3D motion estimation in point clouds that is based on Chasles' kinematic theorem. The proposed algorithm estimates 3D motion parameters directly from the data by exploiting the geometry of rigid transformation using an evidence gathering technique in a Hough-voting-like approach. The algorithm provides an alternative to the feature description and matching pipelines commonly used by numerous 3D object recognition and registration algorithms, as it involves no keypoint detection, feature descriptor computation, or matching. To the best of our knowledge, this is the first work to use kinematic theorems in an evidence gathering framework for motion estimation and surface matching without any given correspondences. Moreover, we propose a method for voting for 3D motion parameters in a one-dimensional accumulator space, which is more efficient than other methods that use up to seven-dimensional accumulator spaces.
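The sketch below illustrates how a one-dimensional accumulator can recover a rigid motion without correspondences, under strong simplifying assumptions not stated in the abstract: the screw axis is known and aligned with z, and the translation along it is zero, leaving the rotation angle as the only unknown. By Chasles' theorem any rigid motion reduces to such a screw motion once the axis is found; the invariant tolerance and binning here are illustrative.

```python
import numpy as np

def vote_screw_angle(src, dst, n_bins=360, tol=1e-3):
    """1-D evidence gathering (sketch): every src/dst point pair whose
    rotation invariants (distance from the axis and height along it) agree
    votes for the angle that would align them. No correspondences are given;
    true matches all vote for the same angle and form a peak."""
    acc = np.zeros(n_bins, dtype=int)
    r_s, z_s = np.hypot(src[:, 0], src[:, 1]), src[:, 2]
    r_d, z_d = np.hypot(dst[:, 0], dst[:, 1]), dst[:, 2]
    a_s = np.arctan2(src[:, 1], src[:, 0])
    a_d = np.arctan2(dst[:, 1], dst[:, 0])
    for i in range(len(src)):
        # Only pairs with consistent invariants can be true correspondences.
        ok = (np.abs(r_d - r_s[i]) < tol) & (np.abs(z_d - z_s[i]) < tol)
        theta = (a_d[ok] - a_s[i]) % (2.0 * np.pi)
        idx = (theta / (2.0 * np.pi) * n_bins).astype(int) % n_bins
        np.add.at(acc, idx, 1)        # accumulate repeated bins correctly
    return np.argmax(acc) * 2.0 * np.pi / n_bins
```

Pairs with mismatched invariants vote incoherently across bins, while the unknown true correspondences all vote for the same angle, so the peak of the one-dimensional accumulator gives the motion estimate.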