A CAD-based grasp synthesis system has been developed that pre-computes valid, stable grasp sites on CAD models for use by a robot system. Extended Gaussian images, an efficient data structure that maps surfaces into Gaussian space according to their surface normals, are reviewed, and a modified version is introduced that maps polygon, edge, and vertex normals. Using the modified extended Gaussian image, pairs of parallel surfaces are found in linear time. Graspable surface pairs include polygon-polygon, polygon-edge, and polygon-point contacts. A new method of computing the rotational stability of edge contacts is given. The stability of each grasp point and its orientation is computed, and all grasps are ranked in descending order of stability. The grasp synthesis program is demonstrated on three models, one concave and two convex; all three types of grasps are shown.
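As a rough illustration of the parallel-surface search (a minimal sketch, not the authors' implementation; the bin resolution, data layout, and function name are assumptions), the following snippet bins polygon normals on a discretized Gaussian sphere and pairs each occupied cell with its antipodal cell, keeping the search linear in the number of faces:

```python
import numpy as np

def egi_parallel_pairs(normals, n_bins=16):
    """Find candidate parallel-surface pairs via a discretized
    extended Gaussian image (EGI).

    normals : (N, 3) array of unit polygon normals
    Returns a list of (faces_a, faces_b) index lists whose normals
    fall into antipodal cells of the Gaussian sphere.
    """
    # Spherical coordinates of each normal.
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))   # polar angle
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)

    # Discretize the sphere into an n_bins x (2 * n_bins) grid.
    ti = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
    pj = np.minimum((phi / (2 * np.pi) * (2 * n_bins)).astype(int),
                    2 * n_bins - 1)

    # One pass over the faces fills the EGI cells: linear time.
    cells = {}
    for face, cell in enumerate(zip(ti.tolist(), pj.tolist())):
        cells.setdefault(cell, []).append(face)

    # A cell's antipode flips the polar index and shifts phi by pi.
    pairs = []
    for (i, j), faces in cells.items():
        anti = (n_bins - 1 - i, (j + n_bins) % (2 * n_bins))
        if anti in cells and (i, j) < anti:     # count each pair once
            pairs.append((faces, cells[anti]))
    return pairs
```

Because the antipode of a cell is computed directly from its indices, no pairwise comparison of faces is needed, which is what makes the parallel-surface search linear rather than quadratic.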
The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated circular target features as the features that can be most accurately located. This paper describes an extensive analysis of circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a Video Positioning Sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them as follows: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and for the errors in (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects somewhat reduced. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point and should be characteristic of typical CCD cameras and digitization equipment.
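To make the centroid computation concrete, here is a minimal grey-scale weighted centroid sketch (not the Video Positioning Sensor algorithm; the thresholding scheme and names are assumptions). Weighting by intensity rather than by a binary mask is one common way to reduce the spatial quantization error of item (1):

```python
import numpy as np

def target_centroid(image, threshold):
    """Grey-scale weighted centroid of a bright circular target.

    image     : 2-D array of pixel intensities
    threshold : background level; pixels at or below it are ignored
    Returns (cx, cy) in pixel coordinates (x = column, y = row).
    """
    # Subtract the background and clip, so each remaining pixel
    # contributes in proportion to its intensity above background.
    w = np.clip(image.astype(float) - threshold, 0.0, None)
    total = w.sum()
    if total == 0.0:
        raise ValueError("no target pixels above threshold")
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total
```

Intensity weighting lets the centroid interpolate below the pixel grid, which is what makes sub-pixel repeatability achievable; the residual sensitivity to noise and timing jitter is what items (1) and (2) above characterize.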
One of the fundamental difficulties in using computer vision in dynamic environments is that camera calibration coefficients must be adjusted as the relative distance between camera and target object changes and the camera is refocused. Such situations arise frequently in robotic environments in which the visual sensor is mobile or the target objects are in motion. This paper presents a method for computing camera calibration coefficients for cases in which the relative motion between camera and target object is known to be a translation along the optical axis, as when the camera moves directly toward or away from an object of interest. The calibration technique is straightforward, involving only the solution of linear equations. It is demonstrated that, within the context of a spatial reasoning system, inclusion of the calibration method can improve the relative accuracy of spatial inferences by one to two orders of magnitude.
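The kind of linear relation such a method can exploit is easy to illustrate: under a pinhole model, two views of the same point taken before and after a known translation along the optical axis determine its depth by eliminating one unknown from two linear equations. The sketch below illustrates that geometry under those assumptions; it is not the paper's calibration procedure, and the names are illustrative:

```python
import numpy as np

def depth_from_axial_motion(x1, x2, d):
    """Depth of a point from two images taken before and after a known
    camera translation d along the optical axis (pinhole model).

    x1, x2 : image coordinate of the same point before / after the move
    d      : translation toward the object, in the same units as depth

    Projection gives x1 * Z = f * X and x2 * (Z - d) = f * X; eliminating
    the unknown f * X leaves one linear equation, Z = x2 * d / (x2 - x1).
    """
    if np.isclose(x1, x2):
        raise ValueError("no measurable image displacement")
    return x2 * d / (x2 - x1)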