2021 IEEE International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra48506.2021.9561222

Precise Multi-Modal In-Hand Pose Estimation using Low-Precision Sensors for Robotic Assembly

Cited by 28 publications (17 citation statements)
References: 40 publications
“…The first two actions (Touch and Look) are described in [5], [6]. The Look action requires either a calibrated camera or a calibration geometry in the image, while the Touch action requires the position of at least one calibrated support surface (and optionally one edge) in the environment.…”
Section: Proposed Method (citation type: mentioning)
confidence: 99%
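The quoted passage characterizes the Touch action only by its requirement of a calibrated support surface. As a hedged illustration of that idea (a minimal sketch, not the method from [5], [6]), the Python snippet below assumes a known support plane given by a point and a normal, and a hypothetical `touch_correction` helper that removes the component of the pose error along the plane normal once contact is detected.

```python
import numpy as np

def touch_correction(est_position, plane_point, plane_normal):
    """Project an estimated object position onto a calibrated support plane.

    A detected contact implies the object actually lies on the plane, so the
    component of the position error along the plane normal can be removed.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    # Signed distance of the current estimate from the calibrated plane.
    offset = np.dot(est_position - plane_point, n)
    # Slide the estimate back onto the plane along its normal.
    return est_position - offset * n

# Example: the estimate floats 4 mm above a table whose surface is z = 0.
corrected = touch_correction(
    est_position=np.array([0.10, 0.20, 0.004]),
    plane_point=np.array([0.0, 0.0, 0.0]),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
print(corrected)  # [0.1 0.2 0. ]
```

Only the degree of freedom along the surface normal is constrained here; adding the optional edge mentioned in the quote would constrain a further in-plane direction.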
“…Each action reduces the pose uncertainty in different ways. The calculation of the Touch and Look action is described in our previous work [5], [6]. For the three extrinsic manipulations, we assume that the pose distribution after the action can be approximated by the representation in §III-A.…”
Section: B. Calculating Pose Uncertainty (citation type: mentioning)
confidence: 99%
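The quoted statement says that each action reduces the pose uncertainty and that the post-action pose distribution is approximated by a representation defined in the citing paper's §III-A, which is not reproduced here. As a generic, hedged stand-in for that idea, the sketch below represents the in-hand pose belief as a Gaussian over (x, y, theta) and applies a standard linear-Gaussian (Kalman-style) measurement update; the observation matrix `H`, the measurement `z`, and the noise `R` are all hypothetical.

```python
import numpy as np

def gaussian_pose_update(mean, cov, H, z, R):
    """Linear-Gaussian measurement update on a pose belief.

    mean, cov : current pose belief (e.g. [x, y, theta]) and its covariance
    H         : observation matrix selecting what the action measures
    z, R      : measurement and its noise covariance
    """
    S = H @ cov @ H.T + R                      # innovation covariance
    K = cov @ H.T @ np.linalg.inv(S)           # Kalman gain
    new_mean = mean + K @ (z - H @ mean)
    new_cov = (np.eye(len(mean)) - K @ H) @ cov
    return new_mean, new_cov

# A Touch-like action that only observes the y offset of the part in the gripper.
mean = np.array([0.002, -0.004, 0.03])         # x, y, theta (m, m, rad)
cov = np.diag([1e-4, 1e-4, 1e-2])
H = np.array([[0.0, 1.0, 0.0]])
z = np.array([-0.001])
R = np.array([[1e-6]])
mean, cov = gaussian_pose_update(mean, cov, H, z, R)
print(np.diag(cov))                            # y variance collapses; x and theta are unchanged
```

Different actions would correspond to different `H` and `R`, which is one way to read the claim that each action reduces the uncertainty "in different ways".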
“…Heterogeneous sensor modalities hold the promise of providing more informative feedback for solving manipulation tasks than uni-modal approaches. Specifically, visual and tactile data [13] have been utilized in object cluttering [14], grasp assessment [15], [16], and pose detection [17]. Besides visual and tactile data fusion, the combination of visual and force/torque feedback is also under study.…”
Section: A. Multi-Modal Sensing in Manipulation (citation type: mentioning)
confidence: 99%
“…The position of the collision is used to determine the pose using a particle filter. This approach has also been extended with a visual check [15]. However, this approach is not feasible for our system as the collisions with the table can bend the pins.…”
Section: A. Pose Estimation (citation type: mentioning)
confidence: 99%
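The quoted passage describes an approach in which the measured collision position drives a particle filter over the in-hand pose. The sketch below is a generic contact-based particle-filter update along those lines, not the cited implementation; the `predicted_contact` function, the noise scale `sigma`, and the toy 1-D example are all assumptions.

```python
import numpy as np

def particle_filter_contact_update(particles, weights, predicted_contact,
                                   measured_contact, sigma=0.002):
    """Re-weight pose particles by how well each one explains a measured collision.

    particles         : (N, d) array of candidate in-hand poses
    predicted_contact : function mapping a pose to the contact point it implies
    measured_contact  : observed collision position (e.g. inferred from joint torques)
    """
    errors = np.array([np.linalg.norm(predicted_contact(p) - measured_contact)
                       for p in particles])
    weights = weights * np.exp(-0.5 * (errors / sigma) ** 2)
    weights /= weights.sum()
    # Systematic resampling keeps the particles consistent with the observed contact.
    positions = (np.arange(len(particles)) + np.random.rand()) / len(particles)
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy example: 1-D "poses" whose implied contact point equals the pose itself.
p = np.linspace(-0.01, 0.01, 200).reshape(-1, 1)
w = np.full(200, 1.0 / 200)
p, w = particle_filter_contact_update(p, w, predicted_contact=lambda x: x,
                                      measured_contact=np.array([0.003]))
```

After the update the surviving particles cluster around poses consistent with the contact, which reflects the quoted idea of determining the pose from the collision position.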