2014 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)
DOI: 10.1109/icarsc.2014.6849774
Learning visual affordances of objects and tools through autonomous robot exploration

Abstract: Endowing artificial agents with the ability to predict the consequences of their own actions, and to efficiently plan their behaviors based on such predictions, is a fundamental challenge in both artificial intelligence and robotics. A computationally practical yet powerful way to model this knowledge, referred to as object affordances, is through probabilistic dependencies between actions, objects and effects: this allows making inferences across these dependencies, such as i) predicting the effects of an ac…

Cited by 22 publications (14 citation statements)
References 17 publications
“…An important step toward solving this problem in a more general way, by considering the properties of both tools and affected objects and modeling how those properties influence the effects of specific actions, has been made recently by Gonçalves et al. [221], [222]. In their work, the Bayesian Network (BN) probabilistic model initially proposed in [184] is extended to consider not just actions that are directly applied to an object, but also actions that involve the use of a tool, i.e.…”
Section: Multi-objects Models and Tool Use
confidence: 99%
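The action–object–tool–effect dependencies described in this statement can be sketched as a conditional probability table estimated by counting over exploration trials. This is a minimal Python stand-in for the Bayesian Network model of the cited work, not the authors' implementation; the class and variable names are illustrative:

```python
from collections import defaultdict, Counter


class AffordanceModel:
    """Discrete affordance model: P(effect | action, object, tool),
    estimated by counting outcomes of autonomous exploration trials."""

    def __init__(self):
        # Maps (action, object descriptor, tool descriptor) -> effect counts.
        self.counts = defaultdict(Counter)

    def observe(self, action, obj, tool, effect):
        """Record one exploration trial and its observed effect."""
        self.counts[(action, obj, tool)][effect] += 1

    def predict(self, action, obj, tool):
        """Return the empirical effect distribution for a configuration,
        or an empty dict if this configuration was never explored."""
        c = self.counts[(action, obj, tool)]
        total = sum(c.values())
        if total == 0:
            return {}
        return {effect: n / total for effect, n in c.items()}
```

A full BN would additionally factorize over object and tool feature variables, letting the model generalize to unseen combinations; the table above only captures the inference direction (from action, object and tool to effect).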
“…The features that we extract are pre-categorical shape descriptors, computed as geometric relationships between the perimeter, area, convex hull and approximated shapes of the segmented silhouettes of the objects in front of the robot; they are based on [32]. Our visual feature extraction is similar to our previous works [19], [20], [22]; however, it now incorporates a richer set of 13 features instead of 5: convexity, eccentricity, compactness, circularity, squareness, number of convexity defects (i.e., the number of cavities along the contour, for example the "holes" between fingers in a hand image), and seven central normalized moments. It is worth noting that we do not classify or label our shapes into classes, but merely reason about their shape-derived features: this gives our system some flexibility, making it able to process unknown situations not seen during training.…”
Section: Visual Features
confidence: 99%
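Descriptors of this kind are ratios between a silhouette's area, perimeter and convex hull. The sketch below is a hypothetical pure-Python illustration (not the authors' implementation) computing compactness, convexity and a related hull-based area ratio from a polygonal contour:

```python
import math


def shoelace_area(pts):
    # Polygon area via the shoelace formula.
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0


def perimeter(pts):
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))


def convex_hull(pts):
    # Andrew's monotone chain, returning hull vertices in CCW order.
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]


def shape_features(contour):
    """Pre-categorical shape descriptors from a polygonal silhouette contour."""
    area, perim = shoelace_area(contour), perimeter(contour)
    hull = convex_hull(contour)
    return {
        "compactness": 4 * math.pi * area / (perim * perim),  # 1.0 for a circle
        "convexity": perimeter(hull) / perim,                 # 1.0 if convex
        "solidity": area / shoelace_area(hull),               # 1.0 if convex
    }
```

In practice such contours would come from segmented camera images (the seven central normalized moments mentioned in the quote would be computed from the same silhouette), but the geometric ratios themselves are as simple as shown here.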
“…Similar to previous research [12], [19], [20], we assume that the affordance of an object and of a robot manipulator is viewpoint-invariant. By exploiting this notion, it is possible to artificially augment the trial data using multiple views of manipulators and objects.…”
Section: B. Data Augmentation
confidence: 99%
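Under the viewpoint-invariance assumption quoted above, each recorded trial can be replicated under rotated views while keeping its original action and effect labels. A minimal illustrative sketch (function names and the choice of planar rotation are hypothetical, not taken from the cited work):

```python
import math
import random


def rotate(points, angle_rad, center=(0.0, 0.0)):
    """Rotate a 2-D point set (e.g. an object silhouette) about a center."""
    cx, cy = center
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return [(cx + c * (x - cx) - s * (y - cy),
             cy + s * (x - cx) + c * (y - cy)) for x, y in points]


def augment_trial(silhouette, n_views=8, seed=0):
    """Generate extra views of one trial's silhouette at random rotations.
    Because the affordance is assumed viewpoint-invariant, every rotated
    copy inherits the trial's original action and effect labels."""
    rng = random.Random(seed)
    return [rotate(silhouette, rng.uniform(0.0, 2.0 * math.pi))
            for _ in range(n_views)]
```

With image data the same idea applies by rotating the rendered views instead of the point sets.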
“…Gonçalves et al. [16], [17] have extended the Bayesian network formulation of [5] to incorporate the notion of tools. This model was further developed in [18] to allow the robot to generalize the learned affordances of its own hands to unseen tools.…”
Section: Previous Work
confidence: 99%
“…This model was further developed in [18] to allow the robot to generalize the learned affordances of its own hands to unseen tools. In a scenario similar to [16], we proposed the use of denoising auto-encoders to learn the affordances of objects and tools in continuous space [19]. Our model could exploit the continuous nature of features and effects while improving on the accuracy of previous works across different measures of performance.…”
Section: Previous Work
confidence: 99%
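The denoising auto-encoder idea mentioned in the statement above — learning continuous feature/effect representations by reconstructing clean vectors from corrupted inputs — can be illustrated with a tiny tied-weight network. This is a generic sketch with illustrative hyperparameters, not the architecture of the cited work [19]:

```python
import numpy as np


def train_denoising_ae(X, hidden=8, noise=0.2, lr=1.0, epochs=1000, seed=0):
    """Train a minimal tied-weight denoising auto-encoder (sigmoid units,
    squared-error loss) on feature vectors X in [0, 1], shape (n, d).
    Returns a reconstruction function clean_estimate = f(noisy_input)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))  # shared encoder/decoder weights
    b = np.zeros(hidden)                   # encoder bias
    c = np.zeros(d)                        # decoder bias
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        Xn = X + rng.normal(0.0, noise, X.shape)  # corrupt the input
        H = sig(Xn @ W + b)                       # encode
        R = sig(H @ W.T + c)                      # decode (tied weights)
        dR = (R - X) * R * (1.0 - R)              # output-layer delta
        dH = (dR @ W) * H * (1.0 - H)             # hidden-layer delta
        W -= lr * (Xn.T @ dH + dR.T @ H) / n      # encoder + decoder grads
        b -= lr * dH.mean(axis=0)
        c -= lr * dR.mean(axis=0)
    return lambda x: sig(sig(x @ W + b) @ W.T + c)
```

Training on continuous object/tool features and effect vectors in this way avoids the discretization required by the BN formulation, which is the advantage the quoted statement refers to.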