Pose estimation is an important operation for many robotic tasks such as camera calibration and landmark tracking. In this paper, we propose a new pose-estimation algorithm based on measuring the volumes of tetrahedra formed by feature-point triplets, extracted from an arbitrary quadrangular target, together with the lens center of the vision system. The method has been tested on synthetic and real data; it is efficient, accurate, and robust. Its speed, in particular, makes it a potential candidate for real-time robotic tasks.

Background

Several researchers have addressed the problem of self-location using standard marks. The central idea of the standard-mark approach is as follows. By observing a single projection of a fixed mark, we are able to determine the position and orientation of a camera with respect to a fixed coordinate system. The mark itself is designed such that, when transformed under perspective projection, it yields enough geometric information to recover the relative target position (sometimes referred to as the interior orientation parameters), the fixed target position (the exterior orientation parameters), and the final pose (the translation and rotation elements of a transformation matrix relating the target frame to the camera frame). Haralick [1,2] has shown that it is possible to determine the camera parameters from the observed perspective projection of a 3-D rectangle of known size and unknown orientation and position. The author provided a broad review of the properties and uses of the transformation matrix for several computer vision reconstruction problems. He also showed how the orientation of a planar surface can be recovered by computing the perspective projection of vanishing points from a number of parallel lines lying on that surface.
Fischler and Bolles [3] have shown that, knowing the coordinates of a number of 3-D points and their corresponding image points, it is possible to compute the position and orientation of the camera using a geometric closed-form technique. They also described important results on the conditions under which multiple solutions exist for various numbers of correspondences between image and target, particularly for the Perspective-4-Point (P4P) and Perspective-3-Point (P3P) problems. They established that there are up to four solutions in the case of a three-point target. Multiple solutions may exist even in the case of four- or five-point targets when these points are unconstrained in space. A unique solution exists when matching four points of known location that are coplanar and noncollinear. The effect of lens distortion was also addressed. Eason et al. [4] and Abidi et al. [5] have formulated the six-, four-, and three-point solutions to this problem. The three-point solution can be recovered by direct means. The four-point solution is also direct for an unconstrained quadrangle. Both the pose parameters and the decomposition of the transformation matrix were obtained simultaneously. No lens distortion was addressed analytically; however, during implementation, the ...
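The geometric quantity at the heart of the proposed method, the volume of a tetrahedron whose vertices are a feature-point triplet and the lens center, reduces to a scalar triple product. The sketch below is a minimal illustration of that single computation, not the paper's full estimation algorithm; the function name and the choice of placing the lens center at the origin are assumptions for the example.

```python
import numpy as np

def tetrahedron_volume(p1, p2, p3, lens_center=(0.0, 0.0, 0.0)):
    """Volume of the tetrahedron spanned by three feature points and the
    lens center: |(a . (b x c))| / 6, where a, b, c are the edge vectors
    from the lens center to the three points."""
    c = np.asarray(lens_center, dtype=float)
    a = np.asarray(p1, dtype=float) - c
    b = np.asarray(p2, dtype=float) - c
    d = np.asarray(p3, dtype=float) - c
    return abs(np.dot(a, np.cross(b, d))) / 6.0
```

A degenerate (coplanar) triplet yields zero volume, which is one way such a measurement can expose ill-conditioned point configurations.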
Multi-sensor systems provide a purposeful description of the environment that a single sensor cannot offer. Fusing several types of data enhances the recognition capability of a robotic system and yields more meaningful information that would otherwise be unavailable or difficult to acquire through a single sensory modality. Because observations provided by sensors are uncertain, incomplete, and/or imprecise, we adopted the theory of fuzzy sets as a general framework for combining uncertain measurements. We developed a fusion formula based on the measure of fuzziness; this formula satisfies several desirable properties. We established a fuzzification scheme by which different types of input data (images) are modeled. This process is essential in providing suitable predictions and explanations of a set of observations in a given environment. After fusion, a defuzzification scheme is carried out to recover crisp data from the combined fuzzy assessments. This approach was implemented and tested with real range and intensity images acquired with an Odetics range finder. The goal is to obtain better scene descriptions through a segmentation process applied to both images. Despite the low resolution of the images and the amount of noise present, the segmented output picture is suitable for recognition purposes.
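The specific fusion formula is not reproduced here, so the following is only a hedged sketch of the fuzzification-fusion-defuzzification pipeline described above, under the assumption that fuzziness is measured with a De Luca-Termini-style entropy and that the less fuzzy (more certain) source is weighted more heavily. The function names, the weighting rule, and the 0.5 defuzzification threshold are all illustrative choices, not the paper's.

```python
import numpy as np

def fuzziness(mu, eps=1e-12):
    """De Luca-Termini-style fuzzy entropy of a membership array.
    Returns 0 for crisp memberships (all 0 or 1), maximal near 0.5."""
    mu = np.clip(np.asarray(mu, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(mu * np.log(mu) + (1.0 - mu) * np.log(1.0 - mu)))

def fuse(mu_range, mu_intensity):
    """Combine two membership maps (e.g., from range and intensity images),
    weighting each source inversely by its overall fuzziness."""
    w1 = 1.0 / (fuzziness(mu_range) + 1e-9)
    w2 = 1.0 / (fuzziness(mu_intensity) + 1e-9)
    return (w1 * np.asarray(mu_range) + w2 * np.asarray(mu_intensity)) / (w1 + w2)

def defuzzify(mu, threshold=0.5):
    """Recover a crisp (binary) decision from fused memberships."""
    return np.asarray(mu) >= threshold
```

With this weighting, a nearly crisp source (memberships close to 0 or 1) dominates a maximally ambiguous one (memberships near 0.5), which matches the intuition that fusion should favor the less uncertain sensor.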