2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2017.8206372
Learning how a tool affords by simulating 3D models from the web

Abstract: Robots performing everyday tasks such as cooking in a kitchen need to be able to deal with variations in the household tools that may be available. Given a particular task and a set of tools available, the robot needs to be able to assess which would be the best tool for the task, and also where to grasp that tool and how to orient it. This requires an understanding of what is important in a tool for a given task, and how the grasping and orientation relate to performance in the task. A robot can lear…

Citations: Cited by 25 publications (21 citation statements)
References: References 16 publications
“…Tool-part or object-part affordances have also been studied using approaches based on collections of labeled data; for instance, Myers et al (2015) and Rezapour Lakani et al (2018) associated local shape and spatial information with affordances such as grasp, scoop, support, pound, and cut. In Abelha and Guerin (2017), tool-part affordances are used to find substitute tools to accomplish a specified task by matching against a database of manually defined object-task pairs.…”
Section: Affordance Learning and Perception
confidence: 99%
“…In the literature on substitute selection, a substitute for a missing tool is typically determined by means of knowledge about objects, and the knowledge-driven similarity between a missing-tool prototype and a potential substitute. Such knowledge about objects varies in content and form across the literature: metric data about position, orientation, and size, plus symbolic knowledge about handpicked relations such as similar-to and capable-of extracted from ConceptNet (Bansal et al, 2020); visual and physical understanding of multi-object interactions demonstrated by humans (Xie et al, 2019); similarity matching of point-cloud shapes and of materials based on spectrometer data using a dual neural network (Shrivatsav et al, 2019); metric data about size, shape, and grasp, as well as a human estimate of an affordance score for task + mass (Abelha and Guerin, 2017); hand-coded attributes and affordances of objects in a logic-based notation, and a multidimensional conceptual space of features such as shape and color intensity (Mustafa et al, 2016); hand-coded models of known tools in terms of superquadrics and the relationships among them (Abelha et al, 2016); potential candidates extracted from WordNet and ConceptNet if they share the same parent with the missing tool for predetermined relations: has-property, capable-of, and used-for (Boteanu et al, 2016); hand-coded object-action relations (Agostini et al, 2015); as well as hand-coded knowledge about inheritance and equivalence relations among objects and affordances (Awaad et al, 2014). Whereas tool selection primarily considers metric data about certain properties, substitute selection considers symbolic knowledge about the object category or class.…”
Section: Introduction
confidence: 99%
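The excerpt above describes a common pattern across these works: a missing-tool prototype is compared against candidate objects by a similarity measure over known object features. The following is a minimal illustrative sketch of that idea, not the method of any cited paper; all feature names, weights, and objects are hypothetical.

```python
# Illustrative sketch (hypothetical features and weights): knowledge-driven
# substitute selection as weighted similarity between a missing-tool
# prototype and candidate objects.

def similarity(prototype, candidate, weights):
    """Weighted similarity over shared features.

    Numeric features contribute a normalized closeness in [0, 1];
    symbolic features contribute 1.0 on exact match, else 0.0.
    """
    s = 0.0
    for feat, w in weights.items():
        p, c = prototype.get(feat), candidate.get(feat)
        if p is None or c is None:
            continue  # feature unknown for one of the objects
        if isinstance(p, (int, float)):
            s += w * (1.0 - min(abs(p - c) / max(abs(p), abs(c), 1e-9), 1.0))
        else:
            s += w * (1.0 if p == c else 0.0)
    return s

def best_substitute(prototype, inventory, weights):
    """Return the name of the candidate most similar to the prototype."""
    return max(inventory.items(),
               key=lambda kv: similarity(prototype, kv[1], weights))[0]

# Hypothetical missing tool and household inventory.
hammer = {"length_cm": 30, "head_mass_kg": 0.5, "shape": "T"}
inventory = {
    "rolling_pin": {"length_cm": 35, "head_mass_kg": 0.4, "shape": "cylinder"},
    "wrench":      {"length_cm": 28, "head_mass_kg": 0.45, "shape": "T"},
    "spoon":       {"length_cm": 18, "head_mass_kg": 0.05, "shape": "scoop"},
}
weights = {"length_cm": 0.3, "head_mass_kg": 0.4, "shape": 0.3}

print(best_substitute(hammer, inventory, weights))  # → wrench
```

The design choice the literature varies on is exactly what goes into `prototype`/`candidate` (metric data, hand-coded relations, learned embeddings) and how `similarity` is defined; the selection loop itself is essentially this argmax.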
“…Other approaches that can accommodate tools of various shapes usually require prohibitively large amounts of data per task. For example, over 20,000 training examples were needed to learn and generalize in a pushing task (Xie et al, 2019); 18,000 simulated tools were used to generalize tool use in sweeping and nail-hammering tasks (Fang et al, 2020); and 5,000 vectorized tool representations were used to train a neural network to generalize tool use in scraping, cutting, and scooping tasks (Abelha and Guerin, 2017; Gajewski et al, 2019). Acquiring such a large training set is infeasible when tasks need to be learned rapidly or when many tasks need to be learned.…”
Section: Introduction
confidence: 99%