We present a method that estimates graspability measures on a single depth map for grasping objects randomly placed in a bin. Our method represents a gripper model by using two mask images: one describes a contact region that should be filled by a target object for stable grasping, and the other describes a collision region that should not be filled by other objects, to avoid collisions during grasping. The graspability measure is computed by convolving the mask images with binarized depth maps, which are thresholded differently in each region according to the minimum height of the 3D points in the region and the length of the gripper. Our method does not assume any 3D model of the objects and is thus applicable to general objects. Our representation of the gripper model using the two mask images also applies to general grippers, such as multi-finger and vacuum grippers. We apply our method to bin picking of piled objects using a robot arm and demonstrate fast pick-and-place operations for various industrial objects.
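The two-mask graspability computation described above lends itself to a compact implementation. Below is a minimal sketch, assuming a NumPy depth map in which larger values are closer to the camera; the function name, the 5-unit contact threshold, and the single global minimum-height estimate are illustrative assumptions (the paper derives thresholds per region), not the authors' code.

```python
# Minimal sketch of the two-mask graspability evaluation described above.
# All names and threshold values are illustrative assumptions.
import numpy as np
from scipy.signal import fftconvolve

def graspability_map(depth, contact_mask, collision_mask, gripper_len):
    """Score every image location for one gripper orientation.

    depth:          HxW depth map (larger value = closer to camera, assumed)
    contact_mask:   hxw binary mask of the region a target object must fill
    collision_mask: hxw binary mask of the region that must stay empty
    gripper_len:    finger length used to offset the collision threshold
    """
    # The paper thresholds each region from the minimum height of its 3D
    # points; this sketch simplifies to one global top-of-pile estimate.
    top = depth.max()
    contact_bin = (depth > top - 5.0).astype(np.float32)             # surface present
    collision_bin = (depth > top - gripper_len).astype(np.float32)   # fingers would hit

    # Correlating each binarized map with its mask (convolution with the
    # flipped mask) counts the "filled" mask pixels at every placement.
    contact_score = fftconvolve(contact_bin, contact_mask[::-1, ::-1], mode="same")
    collision_score = fftconvolve(collision_bin, collision_mask[::-1, ::-1], mode="same")

    # High contact overlap and low collision overlap -> high graspability.
    return contact_score / contact_mask.sum() - collision_score / collision_mask.sum()
```

In practice the masks would be rotated and the map recomputed for each discretized gripper orientation, with the highest-scoring pixel/orientation pair selected as the grasp.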
Bin picking is still a challenge in robotics, as is apparent from recent robot competitions. These competitions are an excellent platform for technology comparisons, since some participants use state-of-the-art technologies while others use conventional ones. Nevertheless, even though points are awarded or deducted based on performance within the framework of the competition rules, the final score does not directly reflect the suitability of the technology. It is therefore difficult to understand which technologies, and which combinations of them, are optimal for various real-world problems. In this paper, we propose a set of performance metrics, selected with actual field use in mind, as a way to clarify the technologies that matter in bin picking. Moreover, we use the selected metrics to compare our four original robot systems, which achieved the best performance in the Stow task of the Amazon Robotics Challenge 2017. Based on this comparison, we discuss which technologies are best suited for practical use in bin-picking robots for factory and warehouse automation.
In this research, we tackle the problem of picking an object from a randomly stacked pile. Since the complex physics of contact among objects and fingers makes it difficult to perform bin picking with a high success rate, we introduce a learning-based approach. To collect a sufficient amount of training data within a reasonable time, we use a physics simulator in which collision checking is approximated. In this paper, we first formulate learning-based robotic bin picking using a convolutional neural network (CNN). We then use the CNN to obtain the optimal grasping posture of a parallel-jaw gripper. Finally, we show that the effect of the approximation introduced in collision checking is mitigated if an exact 3D model is used to generate the depth image of the pile that serves as the CNN input.
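As a concrete illustration of this formulation, here is a minimal PyTorch sketch of a CNN that maps a depth crop of the pile to grasp-success scores over discretized gripper orientations; the architecture, layer sizes, and the name GraspCNN are assumptions made for illustration, not the network from the paper.

```python
# Hypothetical sketch of a grasp-scoring CNN for a parallel-jaw gripper.
import torch
import torch.nn as nn

class GraspCNN(nn.Module):
    def __init__(self, n_orientations: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # One success probability per discretized gripper orientation.
        self.head = nn.Linear(64, n_orientations)

    def forward(self, depth_patch):           # (B, 1, H, W) depth crops
        x = self.features(depth_patch).flatten(1)
        return torch.sigmoid(self.head(x))    # grasp success per orientation

# Training pairs (depth crop, success label) would come from the physics
# simulator described above; the best-scoring orientation is executed.
```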
Advances are being made in applying digital twin (DT) and human–robot collaboration (HRC) technologies to industrial fields for safe, effective, and flexible manufacturing. Using a DT for human modeling and simulation enables ergonomic assessment during work. In this study, a DT-driven HRC system was developed that measures the motions of a worker and simulates the working progress and physical load based on digital human (DH) technology. The proposed system contains virtual robot, DH, and production management modules that are integrated seamlessly via wireless communication. The virtual robot module incorporates the Robot Operating System (ROS) and enables real-time control of the robot based on simulations in a virtual environment. The DH module measures and simulates the worker's motion, behavior, and physical load. The production management module performs dynamic scheduling based on the predicted working progress under ergonomic constraints. The proposed system was applied to a parts-picking scenario, and its effectiveness was evaluated in terms of work monitoring, progress prediction, dynamic scheduling, and ergonomic assessment. This study demonstrates a proof of concept for introducing DH technology into DT-driven HRC for human-centered production systems.
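To make the module integration concrete, the following is a hypothetical Python sketch of the data flow from the DH module to the production management module; the WorkerState fields, the 0.8 load threshold, and the in-process queue standing in for wireless communication are all illustrative assumptions, not the system's actual interfaces.

```python
# Hypothetical sketch of the DH -> production management data flow.
from dataclasses import dataclass
import queue

@dataclass
class WorkerState:            # output of the digital-human module (assumed fields)
    task_id: str
    progress: float           # fraction of the current task completed
    physical_load: float      # ergonomic load estimate in [0, 1]

bus: "queue.Queue[WorkerState]" = queue.Queue()   # stands in for wireless comms

def production_manager(max_load: float = 0.8) -> str:
    """Reschedule when the predicted load violates ergonomic constraints."""
    state = bus.get()
    if state.physical_load > max_load:
        return f"reassign {state.task_id} to robot"   # dynamic scheduling
    return f"keep {state.task_id} with worker ({state.progress:.0%} done)"

bus.put(WorkerState("pick-A12", 0.4, 0.9))
print(production_manager())   # -> reassign pick-A12 to robot
```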