2010 IEEE International Conference on Image Processing
DOI: 10.1109/icip.2010.5651383

Range unfolding for Time-of-Flight depth cameras

Cited by 19 publications (27 citation statements). References 7 publications.
“…We compared the results of our algorithm to that of Choi [17] and Crabb [2]. Over the entire data set, we observed an average of 94.1% pixels labeled correctly from our method, compared to 84.3% for Choi and 89.7% for Crabb.…”
Section: Comparison To Previous Methods (mentioning)
Confidence: 82%
“…The first two columns (green) compare the best estimate using the intensity model by itself, without enforcement of spatial coherence. Columns 3-4 (red) show the performance using the Markov random field approach of [2]: the first uses an intensity model by Choi [17] and the next uses an intensity model without normal estimation and discontinuity cost considering only . Columns 5-7 (orange) enforce spatial coherence by NLCA, shown with selected variations in distance metric.…”
Section: Algorithm Efficiency (mentioning)
Confidence: 98%
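The excerpt above contrasts per-pixel intensity models with methods that enforce spatial coherence over the labeling, for example via a Markov random field. As a rough illustration of that general idea only (not the specific cost terms of [2] or [17], which also involve normal estimation and discontinuity costs), a minimal sketch of a pairwise MRF energy over a per-pixel label image, with hypothetical unary costs and a Potts smoothness term, might look like this:

import numpy as np

def mrf_energy(labels, unary_cost, smoothness_weight=1.0):
    """Pairwise MRF energy for a per-pixel label image.

    labels            : (H, W) integer label per pixel (e.g. a wrapping count).
    unary_cost        : (H, W, L) cost of assigning each of L labels at each pixel,
                        e.g. derived from an intensity or depth likelihood.
    smoothness_weight : weight of the Potts term penalising neighbouring pixels
                        that take different labels.
    """
    h, w = labels.shape
    # Data term: sum of the unary costs of the chosen labels.
    data = unary_cost[np.arange(h)[:, None], np.arange(w)[None, :], labels].sum()
    # Smoothness term: count label changes across horizontal and vertical neighbours.
    smooth = (labels[:, 1:] != labels[:, :-1]).sum() + (labels[1:, :] != labels[:-1, :]).sum()
    return data + smoothness_weight * smooth

Minimising such an energy (for instance with graph cuts or belief propagation) is what "enforcing spatial coherence" refers to in the comparison above.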
“…[19][20][21][22] The methods using a single depth map rely on the assumption that the scene consists of objects with similar reflectivities 14,16,18 or on the assumption that the depth discontinuities between adjacent pixels coincide with the transitions of the number of wrappings from n to either n + 1 or n − 1. 15,17 Since the methods require only a single depth map, they suffer less from motion artifacts, enabling their potential applications in dynamic situations.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Previous phase unwrapping methods can be classified into two groups according to the number of input depth maps: those using a single depth map [14][15][16][17][18] and those using multi-frequency depth maps. [19][20][21][22] The methods using a single depth map rely on the assumption that the scene consists of objects with similar reflectivities 14,16,18 or on the assumption that the depth discontinuities between adjacent pixels coincide with the transitions of the number of wrappings from n to either n + 1 or n − 1.…”
Section: Related Work (mentioning)
Confidence: 99%
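Both excerpts describe the constraint behind single-depth-map range unfolding: a continuous-wave ToF camera measures distance modulo its unambiguous range, so the true distance is the wrapped measurement plus an integer number of wrappings n, and the assumption is that n changes by at most one across a depth discontinuity between adjacent pixels. The sketch below shows only this wrapping relationship; the unambiguous range d_max = c / (2 * f_mod) is standard ToF background rather than something taken from the paper, and the function names are hypothetical.

# Minimal sketch of the wrapping relationship behind range unfolding,
# assuming a continuous-wave ToF camera with modulation frequency f_mod.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz: float) -> float:
    """Maximum distance measurable without wrapping: d_max = c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

def unfold(wrapped_depth_m: float, n_wraps: int, f_mod_hz: float) -> float:
    """True depth = wrapped depth + n * d_max, for an integer wrapping count n."""
    return wrapped_depth_m + n_wraps * unambiguous_range(f_mod_hz)

# Example: at 20 MHz the unambiguous range is about 7.5 m, so a wrapped
# reading of 2.0 m with one wrapping corresponds to roughly 9.5 m.
print(unfold(2.0, 1, 20e6))

Estimating the per-pixel integer n is the actual unfolding problem; the single-depth-map methods cited above do so either by assuming similar reflectivities across the scene or by tying changes in n to depth discontinuities between adjacent pixels.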