2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros45743.2020.9341355
Occlusion-Robust MVO: Multimotion Estimation Through Occlusion Via Motion Closure

Abstract: Visual motion estimation is an integral and well-studied challenge in autonomous navigation. Recent work has focused on addressing multimotion estimation, which is especially challenging in highly dynamic environments. Such environments not only comprise multiple, complex motions but also tend to exhibit significant occlusion. Previous work in object tracking focuses on maintaining the integrity of object tracks but usually relies on specific appearance-based descriptors or constrained motion models. These appr…


Cited by 7 publications (4 citation statements)
References 31 publications
“…This paper is a continuation of ideas that were first presented at the Joint Industry and Robotics CDTs Symposium (Judd et al., 2018) and the Long-term Human Motion Prediction Workshop (Judd and Gammell, 2019b) and were published in Judd et al. (2018a), Judd (2019), and Judd and Gammell (2020). It makes the following specific contributions:
• Presents a unified and updated version of MVO that is adaptable to a variety of trajectory representations and estimation techniques.
• Incorporates a continuous SE(3) white-noise-on-jerk (WNOJ) prior into the estimator, extends it for geocentric third-party estimation, and examines its advantages and limitations in the context of the MEP, along with the previously presented discrete and white-noise-on-acceleration (WNOA) models.
• Details the challenges of full-batch and sliding-window implementations of MVO, including handling temporary occlusions.
• Compares pose-only, pose-velocity, and pose-velocity-acceleration estimators both quantitatively on indoor experiments (OMD) and qualitatively in the real world (KITTI).…”
Section: Introduction
confidence: 91%
“…Egomotion estimation techniques must be adapted to estimate the geocentric trajectories of dynamic objects that invalidate the static tracklet assumption. Initial versions of the ideas in Sections 4.2.1 and 4.2.2 were first published in Judd et al (2018a) and Judd and Gammell (2020), respectively, and they are further developed here. The ideas in Section 4.2.3 are presented here for the first time.…”
Section: Batch SE(3) Estimation
confidence: 99%
“…Recently, multi-object visual odometry techniques [12], [13] and graph-based optimisation dynamic SLAM systems [7], [14], [15] have been explored to jointly localise…”
Section: Introduction
confidence: 99%