2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros.2018.8594003
A modular framework for model-based visual tracking using edge, texture and depth features

Abstract: We present in this paper a modular real-time model-based visual tracker. It is able to fuse different types of measurements, namely edge points, textured points, and depth maps, provided by one or multiple vision sensors. A confidence index is also proposed to determine whether the outputs of the tracker are reliable. As expected, experimental results show that the more measurement types are combined, the more accurate and robust the tracker becomes. The corresponding C++ source code is available for the co…
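The fusion described in the abstract can be pictured as stacking residuals from each feature type (edge, texture, depth) into one weighted least-squares pose update. The sketch below is a minimal illustration of that idea in Python, not the paper's actual C++/ViSP implementation; all names and the per-type scalar weighting are assumptions.

```python
import numpy as np

def fuse_residuals(jacobians, residuals, weights):
    """Combine per-feature-type residuals (e.g. edge, texture, depth)
    into one Gauss-Newton step for a 6-DoF pose increment.

    jacobians: list of (n_i x 6) Jacobians, one per feature type
    residuals: list of length-n_i residual vectors
    weights:   list of scalar weights, one per feature type
    (Illustrative sketch only; names are not the paper's API.)
    """
    JtWJ = np.zeros((6, 6))
    JtWr = np.zeros(6)
    for J, r, w in zip(jacobians, residuals, weights):
        JtWJ += w * J.T @ J   # accumulate normal-equation matrix
        JtWr += w * J.T @ r   # accumulate right-hand side
    # Solve J^T W J * delta = -J^T W r for the pose increment
    return np.linalg.solve(JtWJ, -JtWr)
```

Raising one type's weight makes the update favor that sensor modality, which is one simple way multiple measurements could be balanced against each other.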

Year Published: 2019–2024


Cited by 25 publications (12 citation statements)
References 27 publications
“…The rigid registration is a joint minimization of two error terms: depth based geometric error and keypoint based feature tracking [22]. The two error terms are explained below:…”
Section: Rigid Registration
confidence: 99%
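The joint minimization quoted above can be sketched as a single cost that sums a depth-based geometric term and a keypoint-based feature term. The Python fragment below is only an illustration of that structure under simplifying assumptions (point-to-point geometric error, a hypothetical `kp_pred_fn` helper that predicts 2-D keypoints from the pose); it is not the cited paper's formulation.

```python
import numpy as np

def joint_cost(pose, depth_pts, model_pts, kp_obs, kp_pred_fn, lam=0.5):
    """Illustrative joint registration cost:
    geometric error on depth points + keypoint reprojection error.

    pose:       (R, t) with R a 3x3 rotation and t a length-3 translation
    depth_pts:  observed 3-D points from the depth map (n x 3)
    model_pts:  corresponding model points (n x 3)
    kp_obs:     observed 2-D keypoints (m x 2)
    kp_pred_fn: hypothetical helper mapping pose -> predicted keypoints
    lam:        relative weight of the feature term (assumed scalar)
    """
    R, t = pose
    # Depth-based geometric term: squared distances after transforming the model
    geo = np.sum(np.linalg.norm((model_pts @ R.T + t) - depth_pts, axis=1) ** 2)
    # Keypoint-based feature term: squared 2-D reprojection distances
    feat = np.sum(np.linalg.norm(kp_pred_fn(pose) - kp_obs, axis=1) ** 2)
    return geo + lam * feat
```

In practice such a cost would be minimized over the pose with an iterative solver; the sketch only shows how the two error terms combine into one objective.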
“…Only a few model-free tracking and reconstruction algorithms are publicly available along with their open source code (e.g. [16], [18]), but none of them are directly applicable to model-based object tracking with RGB-D data. We compare the proposed method with two recent approaches: a) 'edge + keypoint + depth tracker' from [25] (denoted in the figures as ViSP), and b) stacked error minimization of point-to-plane distance with photometric error, without using keyframes (denoted as No KF). No KF is close to the tracking module of [18].…”
Section: Results
confidence: 99%
“…However, this approach presents a problem when considering the large range of module designs in circulation. A path planning framework is hence proposed to deal with this limitation in two stages: in the first, the module pose is identified through the use of a model-based tracking algorithm (Trinh et al., 2018), given an approximate module pose is provided as input. In the second stage, point cloud data from an RGBD camera is used to directly define a tool path.…”
Section: Methods
confidence: 99%