2021
DOI: 10.1109/tro.2021.3056043

PoseRBPF: A Rao–Blackwellized Particle Filter for 6-D Object Pose Tracking

Abstract: Tracking 6-D poses of objects from videos provides rich information to a robot in performing different tasks such as manipulation and navigation. In this article, we formulate the 6-D object pose tracking problem in the Rao-Blackwellized particle filtering framework, where the 3-D rotation and the 3-D translation of an object are decoupled. This factorization allows our approach, called PoseRBPF, to efficiently estimate the 3-D translation of an object along with the full distribution over the 3-D rotation. Th…
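To make the factorization concrete, below is a minimal sketch of a Rao-Blackwellized particle filter in this spirit: translation hypotheses are sampled as particles, while each particle carries a full discrete distribution over a rotation grid that is updated in closed form. The `rotation_likelihoods` observation model here is a hypothetical placeholder; the paper derives rotation likelihoods by comparing learned autoencoder embeddings against a precomputed rotation codebook, which is not reproduced in this sketch.

```python
# Minimal sketch of a Rao-Blackwellized particle filter for 6-D pose
# tracking, loosely following the factorization in the abstract:
# translation is sampled per particle, while a full discrete distribution
# over rotations is carried analytically for each particle.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 200   # translation hypotheses
N_ROT_BINS = 72     # discretized rotation grid (placeholder size)

def rotation_likelihoods(observation, translation):
    """Hypothetical observation model: likelihood of the observation for
    each rotation bin, given a candidate translation. PoseRBPF derives
    this from a learned codebook; here it is a dummy placeholder."""
    return rng.random(N_ROT_BINS) + 1e-6

def step(translations, rot_dists, weights, observation, motion_noise=0.01):
    # 1. Motion model: diffuse translation particles with Gaussian noise.
    translations = translations + rng.normal(0.0, motion_noise, translations.shape)
    for i in range(len(translations)):
        lik = rotation_likelihoods(observation, translations[i])
        # 2. Rao-Blackwellized update: the per-particle rotation posterior
        #    is updated in closed form over the discrete bins...
        post = rot_dists[i] * lik
        marginal = post.sum()
        rot_dists[i] = post / marginal
        # 3. ...and the particle weight uses the marginal likelihood,
        #    i.e. the rotation is integrated out rather than sampled.
        weights[i] *= marginal
    weights /= weights.sum()
    # 4. Resample translation particles according to their weights.
    idx = rng.choice(len(translations), size=len(translations), p=weights)
    uniform = np.full(len(translations), 1.0 / len(translations))
    return translations[idx], rot_dists[idx], uniform

translations = rng.normal(0.0, 0.1, (N_PARTICLES, 3))
rot_dists = np.full((N_PARTICLES, N_ROT_BINS), 1.0 / N_ROT_BINS)
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
translations, rot_dists, weights = step(translations, rot_dists, weights, observation=None)
```

Because the rotation is marginalized analytically, each particle weight reflects the evidence summed over all rotation bins, which is what lets this style of filter maintain a full rotation distribution with relatively few translation particles.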

Cited by 145 publications (191 citation statements)
References 65 publications
“…Although this dataset contains real-world annotated training data, se(3)-TrackNet does not use any of it but is trained solely on synthetic data generated by the aforementioned pipeline. It is compared with other state-of-the-art 6D object pose detection approaches [12,25,26,30] and 6D pose tracking approaches [4,10,12,29], where publicly available source code is used to evaluate [10,29], while other results are adopted from the respective publications. All the compared tracking methods except PoseRBPF use the ground-truth pose for initialization.…”
Section: Methods
confidence: 99%
“…All the compared tracking methods except PoseRBPF use the ground-truth pose for initialization. PoseRBPF [4] is the only one that is initialized using predicted poses from PoseCNN [30]. For fairness, two additional experiments using the same initial pose as PoseRBPF are performed and presented in the rightmost two columns of Table I: one without any re-initialization, and the other allowing re-initialization by PoseCNN twice (as in PoseRBPF) after heavy occlusions.…”
Section: Methods
confidence: 99%
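For concreteness, the initialization and bounded re-initialization protocol described in the excerpt above can be sketched as a simple control loop. `detector_pose`, `track_one_frame`, and `is_lost` are hypothetical placeholders, not functions from PoseRBPF or se(3)-TrackNet:

```python
# Hedged sketch of the evaluation protocol from the excerpt above: the
# tracker is initialized from a detector's predicted pose (PoseCNN in the
# excerpt) and may be re-initialized at most twice after heavy occlusion.
MAX_REINITS = 2  # the excerpt allows re-initialization by PoseCNN twice

def run_sequence(frames, detector_pose, track_one_frame, is_lost):
    pose = detector_pose(frames[0])      # initial pose from the detector
    reinits = 0
    poses = [pose]
    for frame in frames[1:]:
        pose = track_one_frame(frame, pose)
        if is_lost(pose, frame) and reinits < MAX_REINITS:
            pose = detector_pose(frame)  # fall back to detection
            reinits += 1
        poses.append(pose)
    return poses
```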
“…As a reminder, a key advantage of FollowUpAR is that it does not require any prior knowledge of the environment or the target object (e.g., size, shape, color). Previous learning-based solutions require either a pre-trained neural network to recognize the target from the whole image [23,48] or a 3D rigid model of the target [9,25], which is labor-intensive and time-consuming. FollowUpAR, in contrast, leverages the user's tapping on the screen to obtain the rough location of the target object.…”
Section: Target Object Separation
confidence: 99%
“…The entire experiment lasts around 15 hours, collecting 800,000 video frames as input. We compare FollowUpAR with three related works, including a classical visual feature-matching method (ICP [57]) and two state-of-the-art learning-based solutions (NOCS [46] and PoseRBPF [9]). The experiment results show that FollowUpAR achieves an average rotation accuracy of 2.3° and a translation accuracy of 2.9…”
Section: Introduction
confidence: 99%
“…Intelligent robots depend on robust perception strategies to perform their key tasks: autonomous navigation [1,2] and adaptive manipulation [3,4]. State-of-the-art approaches for 6D pose (object position and orientation) estimation [5][6][7][8][9], as well as object tracking [10][11][12] and novel grasping techniques [13][14][15][16], enable adaptive task execution and closed-loop control of high-precision manipulation tasks.…”
Section: Introduction
confidence: 99%