2016 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2016.7487400

A model-based approach to finding substitute tools in 3D vision data

Abstract: A robot can feasibly be given knowledge of a set of tools for manipulation activities (e.g. hammer, knife, spatula). If the robot then operates outside a closed environment it is likely to face situations where the tool it knows is not available, but alternative unknown tools are present. We tackle the problem of finding the best substitute tool based solely on 3D vision data. Our approach has simple hand-coded models of known tools in terms of superquadrics and relationships among them. Our system attempts to…
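The abstract describes tool models built from superquadrics fitted to 3D vision data. As a rough, minimal sketch of the underlying machinery (not the paper's code): a superquadric is an implicit surface F(x, y, z) = 1 with semi-axes a1, a2, a3 and shape exponents e1, e2, and it can be fitted to a segmented point cloud by least squares. The function names and the Solina–Bajcsy-style error term below are assumptions for illustration; pose recovery is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def superquadric_residuals(params, pts):
    """Residuals of the superquadric inside-outside function.

    params = (a1, a2, a3, e1, e2): semi-axes and shape exponents.
    Points are assumed to be expressed in the superquadric's own
    frame (pose recovery is omitted here for brevity).
    """
    a1, a2, a3, e1, e2 = params
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    f = ((np.abs(x / a1) ** (2.0 / e2) +
          np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1) +
         np.abs(z / a3) ** (2.0 / e1))
    # Solina-Bajcsy style error term: the volume factor favors the
    # smallest superquadric that still explains the points.
    return np.sqrt(a1 * a2 * a3) * (f ** e1 - 1.0)

def fit_superquadric(pts):
    """Fit one superquadric to an Nx3 point cloud by least squares."""
    half_extents = np.maximum((pts.max(0) - pts.min(0)) / 2.0, 1e-2)
    x0 = np.concatenate([half_extents, [1.0, 1.0]])  # start as an ellipsoid
    bounds = ([1e-3] * 3 + [0.1, 0.1], [np.inf] * 3 + [2.0, 2.0])
    return least_squares(superquadric_residuals, x0,
                         args=(pts,), bounds=bounds).x
```

Calling fit_superquadric on a segmented part of the scene yields a compact (a1, a2, a3, e1, e2) description: exponents near 0.1 give box-like shapes and near 1.0 give ellipsoids, so the recovered parameters capture both the size and the coarse shape of a tool part.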

Cited by 29 publications (59 citation statements, 2017–2024); references 19 publications.
“…Regarding representations, they can use purely Machine Learning to learn features [8], [6], [1] or combine it with some hand-engineered features (e.g. histograms [2]; model fitting [9]). The grounding of the system concerns whether the approach is based on learning from hand-labelled data [2]; simulation of tasks [6], [10]; or a real robot trying out the tools [11].…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
“…The grounding of the system concerns whether the approach is based on learning from hand-labelled data [2]; simulation of tasks [6], [10]; or a real robot trying out the tools [11]. As for outputs, they can be scores for pixels/regions of an image (RGB, RGB-D) [6], [1] (in what is sometimes called a pixel-wise labeling task); scores for 'parts' of a point cloud [2]; or a score for the object as a whole [9]. The outputs also vary in giving a binary score (affords or does not afford) [12] or a graded score [1], [2], [11], [13], [14], and in whether they additionally provide manipulation cues that the robot could use to grasp [8], [1] and orient the tool.…”
Section: Related Work (citation type: mentioning)
Confidence: 99%
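Reference [9] in this statement is plausibly the paper under discussion, which outputs a graded score for the object as a whole. As a purely illustrative sketch (the scoring function, parameter ordering, and example values below are assumptions, not the paper's method), such a graded score could be produced by comparing the superquadric parameters fitted to a candidate object against those of a known tool model:

```python
import numpy as np

def substitute_score(candidate_params, model_params, weights=None):
    # Hypothetical graded score in (0, 1]: compares superquadric
    # parameters (a1, a2, a3, e1, e2) fitted to a candidate object
    # against those of a known tool model. The exponential-distance
    # form is an illustrative assumption, not taken from the paper.
    c = np.asarray(candidate_params, dtype=float)
    m = np.asarray(model_params, dtype=float)
    w = np.ones_like(c) if weights is None else np.asarray(weights)
    # Relative differences so size and shape parameters are comparable.
    d = np.abs(c - m) / (np.abs(m) + 1e-9)
    return float(np.exp(-np.sum(w * d)))

# A cylinder-like rod (round cross-section, e2 = 1.0; flat caps,
# e1 small) scores higher against a known hammer handle than a
# flat, plate-like part does.
hammer_handle = [0.015, 0.015, 0.12, 0.1, 1.0]   # metres, exponents
rod = [0.012, 0.013, 0.10, 0.12, 1.0]
plate = [0.05, 0.05, 0.005, 1.0, 1.0]
print(substitute_score(rod, hammer_handle) >
      substitute_score(plate, hammer_handle))  # True
```

Ranking candidates by such a score gives the graded, whole-object output the citation statement describes, as opposed to a binary affords/does-not-afford decision or per-pixel labels.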