2021 20th International Conference on Advanced Robotics (ICAR) 2021
DOI: 10.1109/icar53236.2021.9659324
Direct 3D model-based tracking in omnidirectional images robust to large inter-frame motion

Abstract: This paper tackles direct 3D model-based pose tracking. It considers the Photometric Gaussian Mixtures (PGM) transform of omnidirectional images as direct features. The contributions include an adaptation of the pose optimization to omnidirectional cameras and a rethink of the initialization and optimization rules of the PGM extent. These enhancements produce a giant leap in the convergence domain width. Application to images acquired onboard a mobile robot within an urban environment described by a large 3D c…

Cited by 3 publications (4 citation statements); references 19 publications.
“…More recently, transforming the images as dense Photometric Gaussian Mixtures could enlarge significantly the convergence domain of the direct alignment of hemispherical images to dense colored 3D point clouds [16] but the computation complexity currently confines that visual feature to offline alignment, thus preventing its use in real-time VS.…”
Section: B. Related Work (mentioning; confidence: 99%)
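The Photometric Gaussian Mixture transform referenced above can be understood as spreading each pixel's intensity over the image plane as an isotropic Gaussian, which smooths the photometric cost surface and widens the convergence basin of direct alignment. A minimal, unoptimized sketch of that idea follows; the function name and `spread` parameter are illustrative assumptions, not the paper's implementation. Its naive O(N²) cost per image also illustrates why the citing authors describe the feature as confined to offline alignment.

```python
import numpy as np

def pgm_transform(image, spread):
    """Sketch of a Photometric Gaussian Mixture: every pixel contributes
    its intensity as an isotropic Gaussian of standard deviation `spread`,
    yielding a smooth, wide-basin photometric field over the image grid."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]          # coordinates of evaluation points
    pgm = np.zeros((h, w), dtype=float)
    for v in range(h):                    # naive double loop: O(N^2) in pixels
        for u in range(w):
            d2 = (ys - v) ** 2 + (xs - u) ** 2
            pgm += image[v, u] * np.exp(-d2 / (2.0 * spread ** 2))
    return pgm
```

Larger `spread` values blur the field more aggressively, trading photometric detail for a wider attraction basin during pose optimization.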
“…These (CAD or point cloud) models could be used in conjunction with spherical photogrammetry (especially in areas that the wheelchair user has never visited before) to underpin real-time vision-based motion estimation algorithms. A first step in this direction has been taken in [29], where a new panoramic 3D pose-tracking algorithm has been shown to provide accurate estimates, even in the presence of large interframe motions (several meters). The algorithm relies on a representation of catadioptric images as a mixture of photometric potentials, similar to the one used for the direct visual gyroscope in [22].…”
Section: Technical Challenges (mentioning; confidence: 99%)
“…The algorithm relies on a representation of catadioptric images as a mixture of photometric potentials, similar to the one used for the direct visual gyroscope in [22]. In future work, we plan to adapt the approach in [29] to dual-fisheye images to have the largest possible number of algorithms working with the same hardware onboard the wheelchair.…”
Section: Technical Challenges (mentioning; confidence: 99%)
“…If a trackable object is not moving and its position known w.r.t. the environment, the dense tracker can also be used to localize the robot [14]. This allows extending the closed-loop task-space mc_rtc controller [15], [16] to whole-body loco-manipulation tasks.…”
Section: Introduction (mentioning; confidence: 99%)
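The localization use in [14] mentioned above rests on a standard rigid-transform identity: if a static object's pose in the world is known and the dense tracker estimates the object's pose in the camera frame, inverting the latter localizes the camera (and hence the robot). A minimal sketch with 4x4 homogeneous transforms (function names are hypothetical):

```python
import numpy as np

def inv_se3(T):
    """Invert a 4x4 homogeneous rigid transform [R | t; 0 0 0 1]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T          # inverse rotation
    Ti[:3, 3] = -R.T @ t      # inverse translation
    return Ti

def localize_camera(w_T_o, c_T_o):
    """Camera pose in the world: w_T_c = w_T_o @ (c_T_o)^-1,
    given the object's known world pose and its tracked camera-frame pose."""
    return w_T_o @ inv_se3(c_T_o)
```

Composing the known object pose with the inverted tracker output is exactly what lets a model-based tracker double as a localizer when the tracked model is anchored in the environment.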