2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission
DOI: 10.1109/3dimpvt.2012.84

Modeling Kinect Sensor Noise for Improved 3D Reconstruction and Tracking

Cited by 287 publications (209 citation statements)
References 8 publications
“…The two error models we use aim to simulate the factory-calibrated data produced by PrimeSense sensors and idealized data with no low-frequency distortion. To produce the idealized data, we process the perfect synthetic depth images using the quantization model described by Konolige and Mihelich [13] and introduce sensor noise following the model of Nguyen et al. [18]. To produce the simulated factory-calibrated data, we add a model of low-frequency distortion estimated on a real PrimeSense sensor using the calibration approach of Teichman et al. [31].…”
Section: Methods
confidence: 99%
“…The first data set corresponds to flat walls in order to obtain axial errors and the second corresponds to a checkerboard block grid in order to obtain lateral errors. As is assumed in most other depth error models ( [72], [80], etc. ), the error from each pseudo-measurement transformed from a depth image is represented as a unique multivariate Gaussian distribution.…”
Section: Depth Image Error Models
confidence: 99%
“…In Figure 3.2(C), the hypersurface fits are plotted on the left for the sensor positioned in front of the wall at a depth of 1600 mm, 2400 mm, and 3200 mm. On the right, the standard depth errors are scatter plotted as a function of radial distance from the center of the focal plane, along with the Menna [72], Nguyen [80], and recalibrated Choo models. The Menna and Nguyen models clearly misrepresent the standard depth errors by assuming a constant, linear model for depth estimates across the focal plane.…”
Section: Formulating the Axial Error Model
confidence: 99%