2013
DOI: 10.1049/iet-cvi.2012.0289

Real‐time depth enhancement by fusion for RGB‐D cameras

Abstract: This paper presents a real-time refinement procedure for depth data acquired by RGB-D cameras. Data from RGB-D cameras suffers from undesired artifacts such as edge inaccuracies or holes due to occlusions or low object remission. In this work, we use recent depth enhancement filters intended for Time-of-Flight cameras, and extend them to structured light based depth cameras, such as the Kinect camera. Thus, given a depth map and its corresponding 2-D image, we correct the depth measurements by separa…
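For orientation, the fusion idea outlined in the abstract (credibility-weighted filtering of the depth map guided by the registered 2-D image, in the spirit of PWAS/UML filters) might look roughly like the NumPy sketch below. The function name, the gradient-based credibility map, the hole handling via zero-valued depth, and all parameter values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pwas_like_fusion(depth, guide, radius=5, sigma_s=3.0, sigma_r=0.1, sigma_c=0.05):
    """Credibility-weighted joint bilateral filtering of a depth map (sketch).

    depth : (H, W) float array; 0 marks invalid pixels (holes).
    guide : (H, W) float array in [0, 1]; registered grayscale guidance image.
    All parameter values are illustrative only.
    """
    # Credibility: low near strong depth discontinuities, zero at holes.
    gy, gx = np.gradient(depth)
    credibility = np.exp(-(gx**2 + gy**2) / (2.0 * sigma_c**2)) * (depth > 0)

    num = np.zeros_like(depth)
    den = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # Align neighbour samples with the centre pixel (borders wrap,
            # which is acceptable for a sketch, not for production use).
            d_n = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            g_n = np.roll(np.roll(guide, dy, axis=0), dx, axis=1)
            c_n = np.roll(np.roll(credibility, dy, axis=0), dx, axis=1)
            w_spatial = np.exp(-(dx*dx + dy*dy) / (2.0 * sigma_s**2))
            w_range = np.exp(-((guide - g_n)**2) / (2.0 * sigma_r**2))
            w = w_spatial * w_range * c_n          # credibility-weighted kernel
            num += w * d_n
            den += w
    return np.where(den > 0, num / np.maximum(den, 1e-8), depth)
```

Invalid pixels contribute zero weight, so holes are filled from credible neighbours with similar guidance intensity, while credible measurements far from depth edges are left largely untouched.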

Cited by 24 publications (31 citation statements). References 19 publications.
“…Tiny depth variations are mainly located along the depth boundaries that the credibility map assigns low weights, where the UML filter behaves as the PWAS filter. We note that the reported SSIM measures are slightly different from those reported in our previous works [13,41]. The reason is that the non-valid/background depth pixels have not been considered in this evaluation.…”
Section: Quantitative Evaluation (mentioning)
confidence: 58%
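The evaluation choice described in the statement above, excluding non-valid/background depth pixels from the SSIM average, can be realised by masking the per-pixel SSIM map. A minimal sketch assuming scikit-image is available; the function name and the mask definition are hypothetical, not taken from the cited works.

```python
import numpy as np
from skimage.metrics import structural_similarity

def masked_ssim(depth_est, depth_ref, valid_mask):
    """Mean SSIM restricted to valid (non-hole, non-background) depth pixels.

    valid_mask : (H, W) boolean array marking the pixels to keep.
    """
    data_range = float(depth_ref.max() - depth_ref.min())
    _, ssim_map = structural_similarity(
        depth_est, depth_ref, data_range=data_range, full=True)
    return float(ssim_map[valid_mask].mean())
```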
“…In order to achieve close to real-time performance, we proposed in [41] not to consider the full 3-D colour information but only the pixelwise RGB channel that best describes the 2-D edge. Although the processing time is slightly increased, this UML filter extension, which we named the RGB-D filter, addresses the edge-blurring artefact caused by the RGB-to-grayscale transformation.…”
Section: Discussion and Extension (mentioning)
confidence: 99%
“…The two pieces of information combined provide more dimensions to model and process, e.g. [9,26,16]. This richer information is desired in several scenarios.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, its computational complexity is high and its estimation accuracy fails in texture-less and occluded regions. Recently, low-cost structured-light RGB-D cameras have been used to capture high-resolution color images and low-resolution depth maps [10]. Thus, depth map upsampling [11,12] followed by its enhancement [13] becomes an inevitable task, because the quality of the DIBR process heavily depends on the accuracy of the depth information.…”
Section: Introduction (mentioning)
confidence: 99%
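As a rough illustration of the depth-map upsampling step mentioned in the statement above, a joint-bilateral-upsampling-style sketch is given below; the weighting scheme, parameter values, and function name are assumptions and do not reproduce the methods of [11,12].

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution depth map guided by a high-resolution image.

    depth_lr : (Hl, Wl) low-resolution depth.
    guide_hr : (Hh, Wh) grayscale guidance with Hh ~= Hl*scale, Wh ~= Wl*scale.
    """
    Hh, Wh = guide_hr.shape
    Hl, Wl = depth_lr.shape
    ys, xs = np.mgrid[0:Hh, 0:Wh]
    yl, xl = ys / scale, xs / scale        # fractional low-res coordinates
    num = np.zeros((Hh, Wh))
    den = np.zeros((Hh, Wh))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny = np.clip(np.round(yl).astype(int) + dy, 0, Hl - 1)
            nx = np.clip(np.round(xl).astype(int) + dx, 0, Wl - 1)
            d_n = depth_lr[ny, nx]
            # Guidance sampled at the HR location of the LR neighbour.
            gy = np.clip(ny * scale, 0, Hh - 1).astype(int)
            gx = np.clip(nx * scale, 0, Wh - 1).astype(int)
            w_s = np.exp(-((ny - yl)**2 + (nx - xl)**2) / (2.0 * sigma_s**2))
            w_r = np.exp(-((guide_hr - guide_hr[gy, gx])**2) / (2.0 * sigma_r**2))
            w = w_s * w_r
            num += w * d_n
            den += w
    return num / np.maximum(den, 1e-8)
```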