2000
DOI: 10.1016/s0031-3203(99)00059-x
Localizing a polyhedral object in a robot hand by integrating visual and tactile data

Cited by 7 publications (3 citation statements)
References: 24 publications
“…Nevertheless, we are assuming that the vision modality is available at first so that the uncertainty can be efficiently narrowed down at the start. In general, the vision modality makes it possible to detect features globally, whereas tactile sensing typically has only a local scope (see [10] for a comparison of visual and tactile data). Therefore, it is usually better to use the vision modality at first (if available), because a wider field can be observed.…”
Section: Discussion (mentioning)
confidence: 99%
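The statement above describes a coarse-to-fine strategy: vision is used first because it detects features globally and narrows the pose uncertainty, while tactile sensing, which is local, then refines the estimate. As a purely illustrative sketch of that idea (not the algorithm of the cited paper), the following Python example samples pose hypotheses around a coarse visual estimate of a cube's centre and keeps the hypothesis whose surface best explains a few tactile contact points; the cube model, noise levels, and all function names are assumptions.

```python
# Minimal sketch of the vision-then-tactile idea described above; NOT the
# method of Boshra and Zhang. Vision supplies a rough global pose estimate
# with large uncertainty, and sparse tactile contacts (local but accurate)
# select the most consistent pose hypothesis. All values are illustrative.
import numpy as np

def cube_surface_distance(points, half_extent=0.05):
    """Unsigned distance from 3-D points to the surface of an axis-aligned cube."""
    q = np.abs(points) - half_extent
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=1)
    inside = np.minimum(np.max(q, axis=1), 0.0)
    return np.abs(outside + inside)

def refine_pose_with_tactile(vision_xy, vision_sigma, contacts,
                             n_hypotheses=500, seed=0):
    """Sample planar pose hypotheses around the coarse visual estimate and
    keep the one whose cube surface best explains the tactile contacts."""
    rng = np.random.default_rng(seed)
    # Coarse stage: broad hypotheses drawn from the visual uncertainty.
    candidates = vision_xy + rng.normal(scale=vision_sigma,
                                        size=(n_hypotheses, 2))
    best, best_err = None, np.inf
    for cx, cy in candidates:
        # Express contacts in the candidate object frame (translation only).
        local = contacts - np.array([cx, cy, 0.0])
        err = np.mean(cube_surface_distance(local))
        if err < best_err:
            best, best_err = (cx, cy), err
    return best, best_err

if __name__ == "__main__":
    true_xy = np.array([0.30, 0.10])                 # ground-truth cube centre
    vision_xy = true_xy + np.array([0.02, -0.015])   # coarse visual estimate
    # Three tactile contact points lying on the cube faces.
    contacts = np.array([[0.35, 0.10, 0.00],
                         [0.30, 0.15, 0.00],
                         [0.25, 0.10, 0.02]])
    pose, residual = refine_pose_with_tactile(vision_xy, 0.03, contacts)
    print("refined centre:", pose, "mean contact residual:", residual)
```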
“…Son et al. 47 reported experimental results on using both visual and tactile information for manipulator grasping and pointed out that, with the introduction of tactile information, more precise results can be obtained. Boshra and Zhang 48 realized localization of a polyhedron by integrating visual and tactile information.…”
Section: Visual-Tactile Fusion for Object Recognition (mentioning)
confidence: 99%
“…A significant amount of work has already been done by researchers to develop robot control strategies that would maximise the capability of individual sensing technologies through their strategic integration. (Namiki and Ishikawa, 1996), (Allen et al., 1999), (Boshra and Zhang, 2000), and (Prats et al., 2009) used sensor fusion strategies to integrate vision and tactile sensing to improve grasping. Pelossof et al. (2004) used simulation based on the SVM (Support Vector Machine) method to find optimum grasps for complex-shaped objects.…”
Section: Literature Review (mentioning)
confidence: 99%