Proceedings. 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications
DOI: 10.1109/iros.1998.724860
Real-time pose estimation of an object manipulated by multi-fingered hand using 3D stereo vision and tactile sensing

Cited by 28 publications (7 citation statements)
References 5 publications
“…Estimation of an object's pose combining stereo vision and a force-torque sensor mounted on the wrist of a robot was reported by Hebert et al [7], who also used the joint position to estimate the location of the fingers with respect to the object's faces. Honda et al [8] used a combination of tactile and vision sensing to estimate an object's pose, assuming that the object is composed of plain and quadratic surfaces. This paper extends the authors' previous work in [9], where only the distance between fingers and object was considered.…” (2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 3-7, 2013)
Section: Introduction (mentioning)
confidence: 99%
“…Based on the assumption that visually similar surfaces are likely to have similar haptic properties, vision is used to create dense haptic maps efficiently across visible surfaces with sparse haptic labels in [168]. Vision can also provide an approximate initial estimate of the object pose that is then refined by tactile sensing using local [173], [174] or global optimization [170].…”
Section: Contact Points (mentioning)
confidence: 99%
“…Visuo-tactile methods have attained attributes such as elasticity, mass and relational constraints [10] and object pose [11], [12]. Shape has been determined with methods such as tactile glances at discrete points on the object [13], visual and tactile feedback from grasping [14], combining visual and tactile exploratory procedures [15] and visuo-tactile fusion [16].…”
Section: Background and Related Work (mentioning)
confidence: 99%