2001
DOI: 10.1023/a:1008198321503

Untitled

Abstract: The development of any robotics application relying on visual information always raises the key question of what image features would be most informative about the motion to be performed. In this paper, we address this question in the context of visual robot positioning, where a neural network is used to learn the mapping between image features and robot movements, and global image descriptors are preferred to local geometric features. Using a statistical measure of variable interdependence called Mutual Information…
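
The abstract describes ranking global image descriptors by how informative they are about the robot's motion before training the network. The sketch below is not the paper's implementation; it only illustrates that idea with scikit-learn's mutual information estimator, and the arrays descriptors and displacement_x, the sample sizes, and the synthetic dependence are all illustrative assumptions.

# Minimal sketch (not the paper's implementation): rank global image
# descriptors by their estimated mutual information with one component of
# the robot displacement and keep the most informative ones.
# All data below is synthetic and the names are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n_samples, n_features = 200, 32

# Hypothetical training set: each row holds global descriptors computed from
# one image, paired with the robot displacement along one axis.
descriptors = rng.normal(size=(n_samples, n_features))
displacement_x = (0.8 * descriptors[:, 3]
                  - 0.5 * descriptors[:, 7]
                  + 0.1 * rng.normal(size=n_samples))

# Estimate I(feature; displacement) for every descriptor and keep the top k;
# only the selected descriptors would then be fed to the neural network.
mi_scores = mutual_info_regression(descriptors, displacement_x, random_state=0)
top_k = 8
selected = np.argsort(mi_scores)[::-1][:top_k]
print("selected descriptor indices:", selected)
print("their MI estimates:", np.round(mi_scores[selected], 3))

A ranking along these lines, computed per movement component, is the kind of input pruning the abstract describes.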

Cited by 16 publications (1 citation statement)
References 34 publications
“…In our context, depending on how the uncertainty of the output data is reduced, a robot perception gives more or less information about the desired actions. It is worth to highlight that this approach has shown satisfactory results in sensor fusion [76], and vision-based positioning of a robotic arm [163].…”
Section: Classical Approach (citation type: mentioning)
confidence: 86%
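
For context, the quantity behind both the abstract and the quoted statement is mutual information, which measures how much observing a perception X reduces the uncertainty (entropy) about the desired action Y. The standard definition is added here for reference; it is not taken from the paper's text shown above:

I(X;Y) = H(Y) - H(Y \mid X) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}

A descriptor with high mutual information with a movement component is one whose observation most reduces the remaining uncertainty about that movement, which is the selection criterion the abstract alludes to.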