Future 3D TV broadcasting systems and navigation applications require accurate stereo matching that can precisely estimate a depth map from two separated cameras. In this paper, we first propose a trinary cross color (TCC) census transform, which helps achieve an accurate raw disparity matching cost at low computational cost. A two-pass cost aggregation (TPCA) is then formed to compute the aggregated cost, after which the disparity map is obtained by a range winner-take-all (RWTA) process and a white hole filling procedure. To further improve accuracy, a range left-right checking (RLRC) method is proposed to classify pixels as correct, mismatched, or occluded. Image-based refinements for the mismatched and occluded pixels are then proposed to correct the classified errors. Finally, image-based cross voting and a median filter are employed to complete the fine depth estimation. Experimental results show that the proposed semi-global stereo matching system achieves highly accurate disparity maps at reasonable computational cost.
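The raw-matching-cost and winner-take-all steps described above can be sketched with a standard binary grayscale census transform in place of the paper's trinary cross color variant. Everything here (function names, the 3x3 window, the disparity range) is an illustrative baseline, not the authors' implementation:

```python
import numpy as np

def census_transform(img, win=3):
    """Standard census transform: compare each pixel in a window to the
    window's centre, producing one bit per neighbour. (A simplified
    baseline, not the paper's trinary cross color variant.)"""
    h, w = img.shape
    r = win // 2
    bits = np.zeros((h, w, win * win - 1), dtype=np.uint8)
    padded = np.pad(img, r, mode='edge')
    k = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            bits[:, :, k] = (shifted < img).astype(np.uint8)
            k += 1
    return bits

def wta_disparity(left, right, max_disp=16):
    """Hamming-distance matching cost + per-pixel winner-take-all."""
    cl, cr = census_transform(left), census_transform(right)
    h, w, _ = cl.shape
    cost = np.full((h, w, max_disp), np.inf)
    for d in range(max_disp):
        if d == 0:
            cost[:, :, 0] = (cl != cr).sum(axis=2)
        else:
            # left pixel x matches right pixel x - d
            cost[:, d:, d] = (cl[:, d:] != cr[:, :-d]).sum(axis=2)
    return cost.argmin(axis=2)
```

A real pipeline would insert cost aggregation between the raw cost and the WTA step, and follow it with left-right checking, as the abstract describes.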
With advances in three-dimensional television (3DTV) technology, accurate depth information for 3DTV broadcasting has gained much attention recently. The depth map, whether retrieved by stereo matching or captured by an RGB-D camera, usually has lower resolution than the texture frame and often contains noisy or missing values. Effectively utilising the high-resolution texture image to enhance the corresponding depth map has therefore become an important and unavoidable approach. In this study, the authors propose texture similarity-based hole filling, texture similarity-based depth enhancement and rotating counsel depth refinement to enhance the depth map. The proposed depth enhancement system can thus suppress noise, fill holes and sharpen object edges simultaneously. Experimental results demonstrate that the proposed system provides superior performance, especially around object boundaries, compared with state-of-the-art depth enhancement methods.
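One simple reading of texture-similarity-based hole filling is sketched below: each zero-valued depth pixel copies the depth of the 8-neighbour whose texture (guide) value is most similar to its own. This is an illustrative interpretation under that assumption, not the authors' exact algorithm:

```python
import numpy as np

def texture_guided_hole_fill(depth, guide):
    """Fill zero-valued depth holes from the 8-neighbour whose
    guide-image (texture) value is closest to the hole pixel's own.
    Illustrative sketch, not the paper's exact method."""
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in zip(*np.nonzero(depth == 0)):
        best_diff, best_d = None, 0.0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy == 0 and dx == 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if depth[ny, nx] == 0:          # skip other holes
                    continue
                diff = abs(float(guide[ny, nx]) - float(guide[y, x]))
                if best_diff is None or diff < best_diff:
                    best_diff, best_d = diff, depth[ny, nx]
        if best_diff is not None:
            out[y, x] = best_d
    return out
```

Because the texture decides which neighbour donates its depth, holes on an object boundary are filled from the correct side of the edge rather than blurred across it.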
A texture image plus its associated depth map is the simplest representation of three-dimensional image and video signals and can be further encoded for efficient transmission. Since it contains fewer variations, a depth map can be coded at much lower resolution than a texture image. Furthermore, the resolution of depth capture devices is usually also lower. Thus, a low-resolution depth map with possible noise requires appropriate interpolation to restore it to full resolution and to remove the noise. In this study, the authors propose potency guided upsampling and adaptive gradient fusion filters to enhance erroneous depth maps. The proposed depth map enhancement system can successfully suppress noise, fill missing values, sharpen foreground objects, and smooth background regions simultaneously. Their experimental results show that the proposed methods perform better in terms of both visual and subjective metrics than the classic methods, and achieve results visually comparable with those of some time-consuming methods.
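The general idea of guided depth upsampling can be illustrated with a minimal joint-bilateral-style sketch: each high-resolution output pixel averages nearby low-resolution depth samples, weighted by spatial distance and by similarity in the high-resolution guide image, so upsampled depth edges follow image edges. This is a generic baseline, not the paper's potency guided upsampling:

```python
import numpy as np

def guided_upsample(depth_lr, guide_hr, scale, radius=1,
                    sigma_s=1.0, sigma_r=10.0):
    """Joint-bilateral-style upsampling sketch (illustrative only)."""
    h, w = guide_hr.shape
    hl, wl = depth_lr.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            yl, xl = y / scale, x / scale        # position in the LR grid
            y0, y1 = max(0, int(yl) - radius), min(hl, int(yl) + radius + 1)
            x0, x1 = max(0, int(xl) - radius), min(wl, int(xl) + radius + 1)
            acc = wsum = 0.0
            for py in range(y0, y1):
                for px in range(x0, x1):
                    # guide value at the HR position of this LR sample
                    g = float(guide_hr[min(h - 1, py * scale),
                                       min(w - 1, px * scale)])
                    ws = np.exp(-((py - yl) ** 2 + (px - xl) ** 2)
                                / (2 * sigma_s ** 2))
                    wr = np.exp(-(g - float(guide_hr[y, x])) ** 2
                                / (2 * sigma_r ** 2))
                    acc += ws * wr * float(depth_lr[py, px])
                    wsum += ws * wr
            out[y, x] = acc / wsum if wsum > 0 else 0.0
    return out
```

The range weight `wr` suppresses low-resolution samples from across an image edge, which is what keeps foreground objects sharp instead of producing the blurred boundaries of plain bilinear upsampling.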
Stereo matching between two separated cameras and structured-light RGB-D cameras are the two common ways to capture a depth map, which conveys the per-pixel depth information of the image. However, results containing mismatched and occluded pixels do not provide accurately matched depth and image information. Such mismatched depth-image relations seriously degrade the performance of view synthesis in modern three-dimensional video applications. Therefore, effectively utilizing the image and the depth to enhance each other becomes increasingly important. In this paper, we propose an advanced multilateral filter (AMF), which refers to spatial, range, depth, and credibility information to achieve these enhancements. The AMF can sharpen the image, suppress noisy depth, fill depth holes, and sharpen depth edges simultaneously. Experimental results demonstrate that the proposed method provides superior performance, especially around object boundaries.
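A multilateral filter generalises the joint bilateral filter by multiplying additional weight terms. The sketch below shows only the spatial and guide-image (range) terms, plus a zero-weight rule for depth holes; the paper's AMF additionally uses depth and credibility terms, which are omitted here, so this is a simplified illustration rather than the proposed method:

```python
import numpy as np

def joint_bilateral_depth_filter(depth, guide, radius=2,
                                 sigma_s=2.0, sigma_r=10.0):
    """Joint bilateral filtering of a depth map guided by the image:
    weights combine spatial distance and guide-image similarity, so
    filtered depth edges align with image edges. Zero-valued depth
    pixels (holes) get zero weight, which also fills small holes.
    (Simplified sketch of a multilateral filter; the paper's AMF adds
    depth and credibility terms.)"""
    h, w = depth.shape
    out = np.zeros_like(depth, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            dwin = depth[y0:y1, x0:x1].astype(float)
            gwin = guide[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y) ** 2 + (xx - x) ** 2)
                         / (2 * sigma_s ** 2))
            w_r = np.exp(-(gwin - float(guide[y, x])) ** 2
                         / (2 * sigma_r ** 2))
            wgt = w_s * w_r * (dwin > 0)       # holes contribute nothing
            s = wgt.sum()
            out[y, x] = (wgt * dwin).sum() / s if s > 0 else 0.0
    return out
```

Each extra weight term in a true multilateral filter would simply be one more factor in `wgt`, which is why the approach extends naturally to the depth and credibility information the abstract mentions.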