2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2017.8206061

Improved object pose estimation via deep pre-touch sensing

Cited by 12 publications (14 citation statements)
References 24 publications
“…The latter may result in imprecise estimation of object shape and pose, with severe impacts on the grasping process. Although in-hand cameras are preferred to the more conventional head-mounted ones, classical calibration procedures still have limitations and remain imprecise even when the distance between the sensor and the object is reduced, as demonstrated in [3].…”
Section: Introduction
confidence: 99%
“…Among the optical pre-touch sensors, Time-of-Flight (ToF) technology represents a solution that overcomes the issues of classical optical sensors. As shown in [3,12,13], time-of-flight sensors do not need calibration and are robust and accurate enough for a wide range of materials. In particular, Lancaster et al [3] successfully combine a depth camera-based point cloud with pre-touch sensor information to improve the accuracy of object pose estimation, while Yang et al [12] used similar time-of-flight-based sensors to improve the accuracy of grasping actions in sequential manipulation tasks while solving the Rubik's Cube.…”
Section: Introduction
confidence: 99%
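The statement above describes combining a depth-camera point cloud with finger-mounted ToF pre-touch range readings. The paper's actual pipeline is more involved, but the basic fusion step — projecting a single pre-touch range measurement along the sensor ray and adding the resulting 3-D point to the cloud — can be sketched as follows (all function and variable names here are illustrative, not taken from the paper):

```python
import numpy as np

def fuse_pretouch_point(cloud, origin, direction, range_m):
    """Append the 3-D point implied by one ToF pre-touch reading
    to an existing point cloud.

    cloud     : (N, 3) array of points in the camera frame
    origin    : (3,) sensor position in the same frame
    direction : (3,) sensor ray direction (need not be unit length)
    range_m   : measured distance along the ray, in meters
    """
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                 # normalize the sensor ray
    hit = np.asarray(origin, dtype=float) + range_m * d
    return np.vstack([cloud, hit])            # densified cloud

# Demo: one reading taken 5 cm straight ahead of the sensor tip.
cloud = np.zeros((0, 3))                      # start from an empty cloud
cloud = fuse_pretouch_point(cloud,
                            origin=np.array([0.0, 0.0, 0.0]),
                            direction=np.array([0.0, 0.0, 1.0]),
                            range_m=0.05)
```

In practice the pre-touch points would be registered into the camera frame via the hand's forward kinematics before fusion; this sketch assumes both are already expressed in the same frame.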
“…The researchers realized grasping of tissue paper, a difficult task with only vision/depth and tactile sensor feedback. In [128], Lancaster et al study the use of deep learning to guide a proximity-based exploration strategy. It yields an improved object model as well as an improved estimate of the object's pose.…”
Section: Higher Complexity Methods and Behaviors (Towards BT-II)
confidence: 99%
“…The active perception principle has been studied in many robotics research papers, as discussed in a survey by Bohg et al [139]. Machine learning is essential in active perception for tasks such as object pose estimation or scene labeling, which can be based on proximity perception, e.g. [128]. A current trend in active perception is the combined perception of vision and touch.…”
Section: B. Cognitive Robotics
confidence: 99%