2020
DOI: 10.1109/tro.2020.2988642
TossingBot: Learning to Throw Arbitrary Objects With Residual Physics

Abstract: We investigate whether a robot arm can learn to pick and throw arbitrary rigid objects into selected boxes quickly and accurately. Throwing has the potential to increase the physical reachability and picking speed of a robot arm. However, precisely throwing arbitrary objects in unstructured settings presents many challenges: from acquiring objects in grasps suitable for reliable throwing, to handling varying object-centric properties (e.g., mass distribution, friction, shape) and complex aerodynamics. In this …
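The "residual physics" idea named in the title can be sketched as follows: an analytic projectile model supplies a nominal release speed for a target, and a learned correction (the residual) is added on top. This is a minimal illustration, not the paper's implementation; the function names, the fixed release angle, and the scalar residual are all assumptions.

```python
import math

def ballistic_release_speed(dx, dz, theta, g=9.81):
    """Closed-form release speed for a point-mass projectile to travel
    horizontal distance dx and end dz above the release height, thrown
    at angle theta (radians). Derived from x = v*cos(theta)*t and
    z = v*sin(theta)*t - g*t^2/2."""
    denom = 2.0 * math.cos(theta) ** 2 * (dx * math.tan(theta) - dz)
    if denom <= 0:
        raise ValueError("target unreachable at this release angle")
    return math.sqrt(g * dx ** 2 / denom)

def throw_speed(dx, dz, residual, theta=math.radians(45)):
    """Residual physics: analytic estimate plus a learned correction.
    Here `residual` stands in for the output of a trained model that
    absorbs unmodeled effects (grasp offset, aerodynamics)."""
    return ballistic_release_speed(dx, dz, theta) + residual
```

The point of the decomposition is that the analytic term generalizes to new target positions for free, while the learned term only has to account for the (smaller) modeling error.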


Cited by 189 publications (77 citation statements)
References 28 publications
“…In this case, FCNs were used to model affordance-based policies. More recently, Zeng et al. also made use of FCNs to encode affordance-based policies and to learn complex behaviours, such as picking and throwing objects, through interaction with the environment [35]. Although these FCN-based models have been shown to be capable of learning object affordances, they cannot cope with non-Euclidean data such as point clouds.…”
Section: Literature Review
confidence: 99%
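The action-selection step behind such FCN affordance policies can be illustrated in a few lines: the network outputs a dense per-pixel (and per-rotation) score map, and the executed action is the argmax over that map. The array shape and function name below are illustrative assumptions.

```python
import numpy as np

def best_pixel_action(affordance_map):
    """Pick the highest-scoring action from a dense affordance map of
    shape (H, W, num_rotations), as produced by a fully convolutional
    policy. Returns (row, col, rotation_index); row/col map back to a
    spatial action location, rotation_index to an end-effector angle."""
    flat_idx = np.argmax(affordance_map)
    return np.unravel_index(flat_idx, affordance_map.shape)
```

Note that this argmax-over-pixels scheme is exactly what ties FCN policies to grid-structured inputs, which is why the quoted passage points out their mismatch with point clouds.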
“…The GSSP approach can be used for any manipulation where the actual outcomes of several practice runs can serve as the desired outcomes, i.e., the actual outcome can substitute for the desired outcome during training. Taking throwing objects into bins as an example [46], we can practice throwing with the robot using unseen objects, record the outcomes, and apply GSSP to generalize the learning model to new objects. Another example is recording the state of food ingredients [47], or a state change [48], after a manipulation is executed, then recording the action sequences, using them as training samples, and fine-tuning the manipulation model to expand its generalization.…”
Section: Generalization By Self-Supervised Practicing
confidence: 99%
“…However, the framework has only been evaluated in simulation. Zeng et al. [280] have proposed an end-to-end formulation that jointly learns to infer control parameters for grasping and throwing motion primitives from visual observations (RGB-D images of arbitrary objects in a bin) through trial and error. In another work [281], a framework leveraging the advantages of active perception has been presented to perform manipulation tasks.…”
Section: Vision-Based Robotic Grasping
confidence: 99%