2011 10th IEEE International Symposium on Mixed and Augmented Reality
DOI: 10.1109/ismar.2011.6162880

KinectFusion: Real-time dense surface mapping and tracking

Figure 1: Example output from our system, generated in real-time with a handheld Kinect depth camera and no other sensing infrastructure. Normal maps (colour) and Phong-shaded renderings (greyscale) from our dense reconstruction system are shown. On the left for comparison is an example of the live, incomplete, and noisy data from the Kinect sensor (used as input to our system).

Abstract: We present a system for accurate real-time mapping of complex and arbitrary indoor scenes in variable lighting conditions, us…

Cited by 681 publications (878 citation statements); References 10 publications.
“…One possible solution to address this limitation is to generate synthetic training data which closely resembles real world scenarios [21]. Object instance recognition is one such potential application where realistic training data can be easily synthesized using 3D object scans (using a Kinect sensor or dense reconstruction [17]) or which are available in large repositories such as the Google 3D warehouse [8]. Instance recognition in the presence of clutter and occlusion has several important applications, particularly in robotics and augmented reality.…”
Section: Introduction
confidence: 99%
“…However, these sensors are relatively noisy and their depth maps contain missing values, making it difficult to extract reliable object shape information. Recent developments in 3D reconstruction [17] have made reliable shape information available in real time. In this work, we explore its use as an input for instance recognition (Fig.…”
Section: Introduction
confidence: 99%
“…In particular, the depth maps provide no measurements for a number of points (which is especially pronounced around object edges), and the measurement noise produces depth fluctuations and incoherent readings among neighboring pixels. To improve the depth accuracy of the measurements, several filtering techniques have been applied, e.g., the bilateral filter [16], the spatio-temporal median filter [29], and the joint-bilateral filter [30]. These approaches rely on the intensity values of the corresponding pixels, as well as on past temporal information from the depth maps, to recover the missing depth measurements.…”
Section: Methods
confidence: 99%
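The bilateral filter mentioned in the excerpt above smooths a depth map while preserving depth discontinuities, by weighting each neighbour both by spatial distance and by depth similarity. A minimal NumPy sketch is shown below; the zero-as-invalid convention, the window radius, and the sigma values are illustrative assumptions, not parameters from the cited works.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=0.05):
    """Edge-preserving bilateral filter for a depth map (in metres).

    Assumption for this sketch: invalid (missing) depth values are
    encoded as 0 and are excluded from every weighted average, so
    holes do not bleed into neighbouring estimates.
    """
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            center = depth[y, x]
            if center == 0:          # missing measurement: leave as-is
                continue
            acc, norm = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    d = depth[ny, nx]
                    if d == 0:       # skip invalid neighbours
                        continue
                    # spatial weight: penalise distant pixels
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                    # range weight: penalise large depth differences,
                    # which preserves edges between objects
                    wr = np.exp(-((d - center) ** 2) / (2 * sigma_r ** 2))
                    acc += ws * wr * d
                    norm += ws * wr
            out[y, x] = acc / norm
    return out
```

With a small `sigma_r`, neighbours across a depth step contribute near-zero weight, so a sharp object boundary survives filtering while noise within a surface is averaged away.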
“…The release of the Kinect sensor in 2010 caused momentous advances in the domain of computer vision, where the provision of both visual (RGB) and depth (D) information about the environment enabled a new spectrum of possibilities and applications [16] [18]. Regarding robotic observational learning, the Kinect sensor's ability to track human motions on-line was quickly embraced by the research community and utilized in a body of works [19][21].…”
Section: Introduction
confidence: 99%
“…Remote quantification of the environment can easily be accomplished by imaging in the appropriate sensory regime, such as optical video cameras for quantifying light conditions and thermal cameras for quantifying the thermal landscapes. Methods for quantifying the physical structure of 3D landscapes are rapidly advancing [58][59][60] and can be used for rendering features of natural habitats, such as trees or streams. When combined with behavioral data, this environmental information should allow biologists to represent an animal's cognitive map of its environment, and thus understand the relationship between behavior and fitness [61].…”
Section: Call To Developers: The Ideal Automated Image-based Tracking
confidence: 99%