Domestic garbage management is an important aspect of a sustainable environment. This paper presents a novel garbage classification and localization system for grasping and placement in the correct recycling bin, integrated on a mobile manipulator. In particular, we first introduce and train a deep neural network (namely, GarbageNet) to detect different types of recyclable garbage. Secondly, we use a grasp localization method to identify a suitable grasp pose to pick the garbage up from the ground. Finally, we perform grasping and sorting of the objects with the mobile robot through a whole-body control framework. We experimentally validate the method, both on visual RGB-D data and indoors on a real full-size mobile manipulator, for the collection and recycling of garbage items placed on the ground.
With the majority of mobile robot path planning methods being focused on obstacle avoidance, this paper studies the problem of Navigation Among Movable Obstacles (NAMO) in an unknown environment containing both static objects (i.e., that cannot be moved by a robot) and movable objects (i.e., that can be moved by a robot). In particular, we focus on a specific instance of the NAMO problem in which the obstacles have to be moved to predefined storage zones. To tackle this problem, we propose an online planning algorithm that allows the robot to reach the desired goal position while detecting movable objects, with the objective of pushing them towards storage zones to shorten the planned path. Moreover, we tackle the challenging case where one obstacle blocks the movement of another, so that a combined displacement plan needs to be applied. To demonstrate the new algorithm's correctness and efficiency, we report experimental results on various challenging path planning scenarios. The presented method has significantly better time performance than the baseline, while also introducing multiple novel functionalities for the NAMO problem.
INDEX TERMS: Motion and path planning, navigation among movable obstacles, mobile robots.
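The core idea of the abstract above — that relocating a movable obstacle can shorten the planned path — can be illustrated with a minimal grid-planning sketch. This is not the paper's algorithm; it is a hypothetical simplification in which static cells (`#`) are impassable while movable cells (`M`) may be traversed at an extra "push" cost:

```python
import heapq

def plan(grid, start, goal, push_cost=3):
    """Dijkstra search on a grid of '.', '#' (static) and 'M' (movable).

    Free moves cost 1; entering a movable-obstacle cell costs
    `push_cost`, a stand-in for the effort of pushing it aside.
    Returns the cheapest total cost, or None if no path exists.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            cell = grid[nr][nc]
            if cell == '#':  # static obstacle: cannot be moved
                continue
            nd = d + (push_cost if cell == 'M' else 1)
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                heapq.heappush(pq, (nd, (nr, nc)))
    return None
```

For example, on a grid whose only gap in a wall is a movable box, the planner routes through the box (paying the push cost) rather than failing, whereas a wall of static obstacles yields no path at all.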
Multi-stage tasks are a challenge for reinforcement learning methods, and require either specific task knowledge (e.g., task segmentation) or a large amount of interaction data to be learned. In this paper, we propose Behavior Policy Learning (BPL), which effectively combines 1) only a few solution sketches, that is, demonstrations containing only the states and not the actions, 2) model-based controllers, and 3) simulations to solve multi-stage tasks without strong knowledge about the underlying task. Our main intuition is that solution sketches alone provide strong data for learning a high-level trajectory by imitation, and model-based controllers can be used to follow this trajectory (we call it a behavior) effectively. Finally, we utilize robotic simulations to further improve the policy and make it robust in a Sim2Real style. We evaluate our method in simulation with a robotic manipulator that has to perform two tasks with variations: 1) grasp a box and place it in a basket, and 2) re-place a book on a different level within a bookcase. We also validate the Sim2Real capabilities of our method by performing real-world experiments and realistic simulated experiments where, for the first task, the objects are tracked through an RGB-D camera.
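The two-part intuition above — state-only demonstrations define a reference trajectory, and a controller then tracks it — can be sketched minimally. The functions below are hypothetical simplifications (1-D states, point-wise averaging, a proportional controller), not the BPL implementation:

```python
def reference_from_sketches(sketches):
    """Average several equal-length, state-only demonstrations
    point-wise into one reference trajectory (the "behavior")."""
    T = len(sketches[0])
    return [sum(s[t] for s in sketches) / len(sketches) for t in range(T)]

def track(ref, x0, gain=0.5, steps_per_point=5):
    """A stand-in for a model-based controller: drive the scalar
    state x toward each reference point with proportional feedback."""
    x, path = x0, [x0]
    for target in ref:
        for _ in range(steps_per_point):
            x = x + gain * (target - x)  # action = gain * tracking error
            path.append(x)
    return path
```

Running `track(reference_from_sketches(demos), x0)` converges to the final demonstrated state without ever observing demonstrated actions, which is the separation of concerns the abstract describes.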
In this work, we tackle one-shot visual search of object parts. Given a single reference image of an object with annotated affordance regions, we segment semantically corresponding parts within a target scene. We propose AffCorrs, an unsupervised model that combines the properties of pre-trained DINO-ViT image descriptors and cyclic correspondences. We use AffCorrs to find corresponding affordances for both intra- and inter-class one-shot part segmentation. This task is more difficult than its supervised alternatives, but enables future work such as learning affordances via imitation and assisted teleoperation. Project page with code and dataset: https://sites.google.com/view/affcorrs
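The cyclic-correspondence idea mentioned above can be illustrated with a small nearest-neighbour sketch. This is a generic cycle-consistency check over descriptor sets, not the AffCorrs model: a forward match from reference set A to target set B is kept only if matching back from B lands on the original descriptor:

```python
def nearest(v, vecs):
    """Index of the vector in `vecs` closest (squared L2) to `v`."""
    return min(range(len(vecs)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, vecs[i])))

def cyclic_matches(A, B):
    """Keep only cycle-consistent correspondences A[i] <-> B[j]:
    the forward match A -> B must map back to i under B -> A."""
    matches = []
    for i, a in enumerate(A):
        j = nearest(a, B)          # forward: reference -> target
        if nearest(B[j], A) == i:  # backward check closes the cycle
            matches.append((i, j))
    return matches
```

Descriptors in B with no stable counterpart in A (e.g., background clutter) fail the backward check and are discarded, which is what makes cycle consistency a useful filter for one-shot part matching.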