2018
DOI: 10.17533/udea.redin.n86a07
Dense tracking, mapping and scene labeling using a depth camera

Keywords: dense reconstruction, camera localization, depth sensor, volumetric representation, object detection, multiple-instance labeling

Abstract: We present a system for dense tracking, 3D reconstruction, and object detection of desktop-like environments using a depth camera, the Kinect sensor. The camera is moved by hand while its pose is estimated, and a dense model of the scene, with evolving color information, is constructed. Alternatively, the user can couple the object detec…
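The abstract and keywords mention a volumetric representation that is updated as the hand-held camera moves. A minimal sketch of one common way to realize this, fusing each depth image into a truncated signed distance function (TSDF) on a voxel grid, is shown below; the grid size, intrinsics, weighting scheme, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a volumetric fusion step: each depth image updates a
# TSDF stored on a voxel grid. Grid layout, intrinsics, and the simple running
# average are assumptions made for illustration only.
import numpy as np

GRID_DIM = 128          # voxels per axis (assumed)
VOXEL_SIZE = 0.01       # metres per voxel, grid anchored at the world origin (assumed)
TRUNCATION = 0.04       # TSDF truncation distance in metres (assumed)

# Kinect-like pinhole intrinsics (typical defaults, not calibration results).
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

tsdf = np.ones((GRID_DIM,) * 3, dtype=np.float32)      # signed distances, initialised to "far"
weights = np.zeros((GRID_DIM,) * 3, dtype=np.float32)  # per-voxel fusion weights


def integrate_depth(depth, cam_pose):
    """Fuse one depth image (H x W, metres) given the camera pose (4x4, camera-to-world)."""
    world_to_cam = np.linalg.inv(cam_pose)
    h, w = depth.shape

    # World coordinates of every voxel centre.
    idx = np.arange(GRID_DIM)
    gx, gy, gz = np.meshgrid(idx, idx, idx, indexing="ij")
    pts_w = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) * VOXEL_SIZE

    # Transform voxel centres into the camera frame and project into the image.
    pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_c[:, 2]
    in_front = z > 1e-6
    z_safe = np.where(in_front, z, 1.0)
    u = np.round(FX * pts_c[:, 0] / z_safe + CX).astype(int)
    v = np.round(FY * pts_c[:, 1] / z_safe + CY).astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Measured depth along each voxel's ray (0 means no measurement).
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0

    # Signed distance to the observed surface, truncated to [-1, 1];
    # voxels far behind the surface are left untouched.
    sdf = (d - z) / TRUNCATION
    valid &= sdf > -1.0
    sdf = np.clip(sdf, -1.0, 1.0)

    # Weighted running average per voxel (views into the global grids).
    flat_tsdf = tsdf.reshape(-1)
    flat_w = weights.reshape(-1)
    w_new = flat_w[valid] + 1.0
    flat_tsdf[valid] = (flat_tsdf[valid] * flat_w[valid] + sdf[valid]) / w_new
    flat_w[valid] = w_new
```

In a complete pipeline the 4x4 pose passed to integrate_depth would come from the tracking stage; here it is simply an input to the fusion step.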

Cited by 3 publications (1 citation statement); references 31 publications.

“…The point clouds acquired with the depth sensor are defined in Cartesian coordinates (x, y, z); these coordinates represent the surface of the scanned object or area. Depth-based systems have been used to solve neuro-rehabilitation problems [2,3], video conferencing systems [4], facial recognition [5,6], extraction and recognition of human body movements [7][8][9][10], search, localization, and detection of objects [11][12][13][14], navigation [15][16][17], robotics [18][19][20][21], reconstruction [22][23][24][25], modeling of objects or surfaces [26][27][28][29][30][31][32], and plant monitoring [33][34][35][36][37][38][39][40][41][42].…”
Section: Introduction (mentioning)
confidence: 99%
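The quoted statement describes depth-sensor point clouds as Cartesian (x, y, z) samples of the scanned surface. A minimal back-projection sketch under a pinhole camera model is given below; the intrinsic values are typical Kinect-like defaults assumed for illustration, not calibration results from any of the cited works.

```python
# Sketch: convert a depth image into an (x, y, z) point cloud by pinhole
# back-projection. Intrinsics below are assumed illustrative values.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5  # assumed focal lengths / principal point


def depth_to_point_cloud(depth):
    """Convert a depth image (H x W, metres) into an N x 3 array of (x, y, z) points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel coordinates
    z = depth
    valid = z > 0                      # zero depth means no measurement
    x = (u - CX) * z / FX              # pinhole back-projection
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=-1)


# Example with a synthetic flat depth image 1 m from the camera.
cloud = depth_to_point_cloud(np.full((480, 640), 1.0))
print(cloud.shape)                      # (307200, 3)
```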