2012
DOI: 10.1007/978-1-4302-3868-3

Hacking the Kinect

Abstract: This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. Exempted from this legal reservation are brief excer…

Cited by 56 publications (28 citation statements)
References: 0 publications
“…Next, the aim is to generate two surfaces, source A and target B, as registration input. B is defined as the skin mesh generated via segmentation from CT data and A is produced as follows: The Kinect depth data are converted to 3D world coordinates according to the pinhole camera model and based on [25]. A straightforward triangulation [23] is enhanced with the following simple procedures to allow a fully automatic real-time segmentation of depth data: Let v_ij denote the 3D coordinates (relative to the camera coordinates) of the pixel at index (i, j) ∈ I.…”
Section: Preprocessing and Automatic Depth Data Segmentation
confidence: 99%
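The quoted passage back-projects each Kinect depth pixel to a 3D point v_ij through the pinhole camera model and then triangulates the pixel grid with a simple real-time segmentation. Below is a minimal sketch of such a pipeline; the intrinsics (fx, fy, cx, cy are typical published Kinect v1 depth-camera values) and the depth-discontinuity threshold are illustrative assumptions, not values from the cited works [25] and [23].

```python
# Sketch: back-project Kinect depth pixels to camera-space points v_ij via
# the pinhole model, then triangulate the regular pixel grid, skipping
# triangles that straddle large depth jumps as a simple segmentation rule.
# Intrinsics and threshold are assumptions, not from the cited paper.
import numpy as np

def depth_to_points(depth_mm, fx=594.2, fy=591.0, cx=339.3, cy=242.7):
    """HxW depth image in millimetres -> HxWx3 array of points v_ij (metres)."""
    h, w = depth_mm.shape
    j, i = np.meshgrid(np.arange(w), np.arange(h))  # j = column index, i = row index
    z = depth_mm.astype(np.float64) / 1000.0        # depth in metres
    x = (j - cx) * z / fx                           # pinhole: X = (u - cx) * Z / fx
    y = (i - cy) * z / fy                           # pinhole: Y = (v - cy) * Z / fy
    return np.stack((x, y, z), axis=-1)

def triangulate_grid(points, max_jump=0.05):
    """Triangulate the pixel grid; drop quads whose vertices differ in depth
    by more than max_jump metres (an assumed segmentation heuristic)."""
    h, w, _ = points.shape
    tris = []
    for i in range(h - 1):
        for j in range(w - 1):
            quad = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
            z = [points[r, c, 2] for r, c in quad]
            if min(z) > 0 and max(z) - min(z) < max_jump:  # zero depth = invalid pixel
                idx = [r * w + c for r, c in quad]
                tris.append((idx[0], idx[1], idx[2]))      # upper-left triangle
                tris.append((idx[1], idx[3], idx[2]))      # lower-right triangle
    return np.asarray(tris)
```

Skipping quads that contain a zero (invalid) depth reading or a large depth jump is one simple way to realize the fully automatic segmentation the passage describes: foreground surfaces come out as connected meshes, separated from the background at depth discontinuities.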
“…In 2010 a new depth-sensing technology was developed under the name Kinect (Kramer et al., 2012). It was originally intended for stationary indoor use in combination with a gaming console.…”
Section: Data Collection
confidence: 99%
“…It was originally intended for stationary indoor use in combination with a gaming console. The technology has proven to be quite accurate in depth estimates and has become of interest for geophysical research (Kramer et al., 2012; Mankoff and Russo, 2012). Subsequently, similar sensors started to be developed in parallel with the Kinect.…”
Section: Data Collection
confidence: 99%
“…In order to achieve real-time reconstruction of users over the web, the Kinect sensor's depth and color streams are used to generate a colored 3D point cloud as described in [6], which can then be placed inside the virtual scene. The Kinect depth stream is represented as a 3-channel color video, obtained through the ZigFu browser plugin, where the Least Significant Byte (LSB) is encoded in the red channel …”
Section: Real-time On-line Reconstruction
confidence: 99%
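The quoted scheme packs 16-bit Kinect depth into an ordinary color video so a browser can consume it; the excerpt is truncated, and only the detail that the least significant byte travels in the red channel survives. A minimal decoding sketch follows, assuming, purely for illustration, that the most significant byte is carried in the green channel.

```python
# Sketch: recover 16-bit depth from a 3-channel video frame, following the
# quoted scheme (LSB in the red channel). The excerpt is truncated, so
# placing the MSB in the green channel is an assumption made here only
# for illustration.
import numpy as np

def decode_depth_frame(rgb):
    """rgb: HxWx3 uint8 frame -> HxW uint16 depth values."""
    lsb = rgb[..., 0].astype(np.uint16)  # red channel: low byte (per the text)
    msb = rgb[..., 1].astype(np.uint16)  # green channel: high byte (assumed)
    return (msb << 8) | lsb
```

Splitting the 16-bit value across two 8-bit channels keeps the depth stream compatible with standard video pipelines; the decoded values can then be back-projected to a colored point cloud as in [6].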