2014 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2014.6907804

Bimanual compliant tactile exploration for grasping unknown objects

Abstract: Humans have an incredible capacity to learn the properties of objects through pure tactile exploration with their two hands. As robots move into human-centred environments, tactile exploration becomes increasingly important, since vision may easily be occluded by obstacles or may fail under varying illumination conditions. In this paper, we present our first results on bimanual compliant tactile exploration, with the goal of identifying objects and grasping them. An exploration strategy is proposed to guide the m…

Cited by 43 publications (27 citation statements)
References 18 publications
“…4(e)-(h), when the training data points are sampled from the whole object point cloud, the variance or shape uncertainty is generally very small over the whole surface, except in parts with sparse or even no data points, such as the bottom of the jug. In robotic grasping tasks, due to occlusion [48,49] or non-reachability during tactile exploration [21], it is usually the case that some parts of the object are not perceivable and the point clouds exhibit holes. To evaluate our method under missing data points, we use MeshLab to simulate a partial view of the point cloud from a fixed camera pose, and then obtain the object point cloud from that virtual camera.…”
Section: Results for Object Surface Modeling
confidence: 99%
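The effect the citing authors describe, GP predictive variance growing wherever the point cloud has a hole, can be reproduced in a few lines. This is a minimal illustrative sketch (not the cited paper's implementation), using a toy 1-D surface profile with an unobserved gap in [4, 6]:

```python
# Sketch: GP shape uncertainty is small where data are dense and large
# inside a "hole" in the observations -- the behaviour described above
# for the unperceived bottom of the jug. Toy data, not the paper's code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Training points cover [0, 4] and [6, 10]; the interval [4, 6] is a hole.
x_train = np.concatenate([rng.uniform(0, 4, 30), rng.uniform(6, 10, 30)])
y_train = np.sin(x_train)                       # toy surface profile

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(x_train.reshape(-1, 1), y_train)

# Query one point in the dense region and one in the centre of the hole.
x_query = np.array([[2.0], [5.0]])
_, std = gp.predict(x_query, return_std=True)

# Predictive standard deviation is larger inside the unobserved region.
assert std[1] > std[0]
```

The same mechanism extends to 3-D implicit-surface GPs: the posterior variance field directly marks which surface regions still need tactile exploration.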
“…The object point clouds are obtained from a laser scanner, and 1000 data points are randomly sampled from the original point cloud. To speed up the object shape modeling procedure, we further adopt a GP-based filter to select the most informative data points [21] for representing the GP. The filtered data points for the GP are shown as spheres on the object surface, Fig.…”
Section: Results for Object Surface Modeling
confidence: 99%
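A common way to realise such a GP-based filter is greedy uncertainty sampling: repeatedly refit the GP on the points chosen so far and add the candidate where the posterior variance is largest. The sketch below shows that general idea under toy assumptions (random 3-D cloud, fixed RBF kernel, hypothetical function name `select_informative`); it is not the exact method of reference [21]:

```python
# Hedged sketch of a GP-based "informative point" filter: greedily keep the
# candidate point where the current GP is most uncertain. Illustrative only.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def select_informative(points, values, budget, seed=0):
    """Greedily pick `budget` indices that most reduce GP uncertainty."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]     # seed with a random point
    for _ in range(budget - 1):
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      optimizer=None, alpha=1e-6)
        gp.fit(points[chosen], values[chosen])
        _, std = gp.predict(points, return_std=True)
        std[chosen] = -np.inf                     # never re-pick a point
        chosen.append(int(np.argmax(std)))        # most uncertain candidate
    return np.array(chosen)

# Toy cloud: 1000 random 3-D points with a scalar surface value.
pts = np.random.default_rng(1).uniform(-1.0, 1.0, (1000, 3))
vals = np.linalg.norm(pts, axis=1)
idx = select_informative(pts, vals, budget=20)
```

Fixing the kernel (`optimizer=None`) keeps each refit cheap, which matters because the GP is refit once per selected point.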
“…We also developed an algorithm for bimanual haptic exploration of objects [13], using multiple phalanxes per finger, covered with tactile sensors, for the 3-D reconstruction of object shapes. However, these approaches were limited in the range of object shapes that could be explored.…”
Section: Tactile Exploration
confidence: 99%