We present a new approach for transferring grasp configurations from prior example objects to novel objects. We assume the novel and example objects have the same topology and similar shapes. We perform 3D segmentation on these objects using geometric and semantic shape characteristics. We compute a grasp space for each part of the example object using active learning. We build a bijective contact mapping between these model parts and compute the corresponding grasps for novel objects. Finally, we assemble the individual parts and use local replanning to adjust the grasp configurations while maintaining stability and satisfying physical constraints. Our approach is general: it can handle all kinds of objects represented as meshes or point clouds, and a variety of robotic hands.
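To make the contact-mapping step concrete, here is a minimal sketch in Python, assuming each segmented part is given as an (N, 3) point cloud. The function names and the normalize-then-nearest-neighbor correspondence are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

def normalize(points):
    """Center a point cloud and scale it by its bounding-box diagonal."""
    center = points.mean(axis=0)
    shifted = points - center
    scale = np.linalg.norm(shifted.max(axis=0) - shifted.min(axis=0))
    return shifted / scale, center, scale

def map_contacts(example_part, novel_part, contacts):
    """Transfer grasp contact points from an example part to a novel part.

    Both parts are placed in a common normalized frame; each contact is
    then snapped to its nearest neighbor on the novel part. For parts with
    the same topology and similar shape, this yields a usable correspondence.
    """
    _, ex_center, ex_scale = normalize(example_part)
    novel_norm, nv_center, nv_scale = normalize(novel_part)
    contacts_norm = (contacts - ex_center) / ex_scale
    # Nearest-neighbor correspondence in the normalized frame.
    dists = np.linalg.norm(novel_norm[None, :, :] - contacts_norm[:, None, :],
                           axis=2)
    nearest = dists.argmin(axis=1)
    # Map the matched points back into the novel object's original frame.
    return novel_norm[nearest] * nv_scale + nv_center

# Synthetic usage: a novel part that is a scaled, shifted copy of the example.
rng = np.random.default_rng(0)
example_part = rng.uniform(size=(500, 3))
novel_part = example_part * 1.3 + 0.2
mapped = map_contacts(example_part, novel_part, example_part[:4])
```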
We present a realtime virtual grasping algorithm to model interactions with virtual objects. Our approach is designed for multi-fingered hands and makes no assumptions about the motion of the user's hand or the virtual objects. Given models of the virtual hand and a virtual object, we use machine learning and particle swarm optimization to automatically pre-compute stable grasp configurations for that object. This pre-computation step is accelerated using GPU parallelization. At runtime, we rely on the pre-computed stable grasp configurations, dynamics and non-penetration constraints, and motion planning techniques to compute plausible-looking grasps. In practice, our realtime algorithm can perform virtual grasping operations in less than 20 ms for complex virtual objects, including high-genus objects with holes. We have integrated our grasping algorithm with an Oculus Rift HMD and a Leap Motion controller and evaluated its performance on tasks that involve grabbing virtual objects and placing them at arbitrary locations. Our user evaluation suggests that our virtual grasping algorithm increases the user's sense of realism and engagement in these tasks and offers considerable benefits over prior interaction algorithms, such as pinch grasping and raycast picking.
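As an illustration of the pre-computation stage, below is a minimal, generic particle swarm optimization loop in Python; `grasp_quality` is a hypothetical stand-in for the paper's stability metric over hand joint angles, and none of the names come from the authors' code.

```python
import numpy as np

def pso(objective, dim, n_particles=32, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over a dim-dimensional box with a standard PSO loop."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))    # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # per-particle best positions
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()]                     # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()]
    return g

# Hypothetical grasp-quality objective over joint angles (lower is more stable).
def grasp_quality(joint_angles):
    return np.sum((joint_angles - 0.3) ** 2)

best_config = pso(grasp_quality, dim=20)  # e.g. a 20-DOF hand model
```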
We present an algorithm for computing the global penetration depth between an articulated model and an obstacle, or between distinct links of an articulated model. To do so, we use a formulation of penetration depth derived in configuration space. We first compute an approximation of the boundary of the obstacle regions using a support vector machine in a learning stage. Then, we employ a nearest-neighbor search to perform a runtime query for penetration depth. The computational complexity of the runtime query depends on the number of support vectors, and its running time varies from 0.03 to 3 milliseconds in our benchmarks. We can guarantee that the configuration realizing the penetration depth is penetration-free, and the algorithm can handle general articulated models. We tested our algorithm in robot motion planning and grasping simulations using many high-degree-of-freedom (DOF) articulated models. Our algorithm is the first to efficiently compute global penetration depth for high-DOF articulated models.
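The two stages can be sketched as follows, assuming configurations are sampled and labeled in-collision or free by some collision checker (a hypothetical placeholder here). The sketch uses scikit-learn and SciPy, and approximates the runtime query as a nearest-neighbor search over the penetration-free support vectors; it is an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

def in_collision(configs):
    # Hypothetical stand-in for a real collision checker in configuration space.
    return np.linalg.norm(configs, axis=1) < 0.5

# Learning stage: approximate the C-obstacle boundary with an SVM.
samples = rng.uniform(-1.0, 1.0, (5000, 6))          # 6-DOF configurations
labels = in_collision(samples).astype(int)           # 1 = in collision, 0 = free
svm = SVC(kernel="rbf", gamma=2.0).fit(samples, labels)

# Keep only the penetration-free support vectors: they lie near the free side
# of the boundary, so the nearest one to a colliding query configuration is a
# penetration-free witness.
free_sv = svm.support_vectors_[labels[svm.support_] == 0]
tree = cKDTree(free_sv)

# Runtime query: distance to the nearest free support vector approximates the
# penetration depth; query cost depends on the number of support vectors.
query = np.zeros((1, 6))                              # a colliding configuration
depth, idx = tree.query(query)
witness = free_sv[idx[0]]
```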
This article proposes a novel method for one-class classification based on a divide-and-conquer strategy to improve the one-class support vector machine (SVM). The idea is to build a piecewise-linear separation boundary in the feature space to separate the data points from the origin, which is expected to yield a more compact region in the input space. For this purpose, the input space of the dataset is first divided into a group of partitions using the partitioning mechanism of a top-s% winner-take-all autoencoder. A gated linear network is designed to implement a group of linear classifiers, one for each partition, in which the gate signals are generated by the autoencoder. By applying a one-class SVM (OCSVM) formulation to optimize the parameter set of the gated linear network, the one-class classifier is implemented in exactly the same way as a standard OCSVM with a quasi-linear kernel, composed from a base kernel and the gate signals. The proposed one-class classification method is applied to different real-world datasets, and simulation results show that it performs better than a traditional OCSVM. © 2018 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
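The kernel composition can be sketched as follows in Python with scikit-learn. The winner-take-all gate function below is a simplified stand-in for the paper's top-s% winner-take-all autoencoder, and the exact product form of the quasi-linear kernel is an assumption based on the description above.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # one-class training data

# Hypothetical gate signals: soft assignment of each point to a set of
# partitions, keeping only the top-s% strongest activations (winner-take-all).
centers = X[rng.choice(len(X), size=8, replace=False)]

def gates(A, s=0.25):
    act = -((A[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    k = max(1, int(s * act.shape[1]))
    thresh = np.sort(act, axis=1)[:, -k][:, None]
    w = np.exp(act - act.max(axis=1, keepdims=True))   # numerically stable weights
    g = np.where(act >= thresh, w, 0.0)
    return g / g.sum(axis=1, keepdims=True)

def quasi_linear_kernel(A, B):
    # Linear base kernel weighted by how strongly two points share partitions,
    # which realizes a piecewise-linear separation boundary overall.
    return (A @ B.T + 1.0) * (gates(A) @ gates(B).T)

# Train a standard OCSVM on the precomputed quasi-linear kernel matrix.
ocsvm = OneClassSVM(kernel="precomputed", nu=0.1)
ocsvm.fit(quasi_linear_kernel(X, X))
scores = ocsvm.decision_function(quasi_linear_kernel(X, X))
```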