Introduction

Most segment-based stereo methods estimate disparity by modeling color segments as 3-D planes [2]. Such methods are inherently sensitive to segmentation parameters and intolerant of segmentation errors. They depend on the underlying segmentation algorithm in two main ways: the size of the segments used for estimating planes, and the assignment of a single plane to an entire segment. In the case of under-segmentation, there is a higher chance of merging multiple objects (with multiple planar surfaces) into a single segment, so the planes estimated from these segments are erroneous. The error propagates to the disparity map, where a large segment spanning multiple objects is incorrectly represented by a single disparity plane. In the over-segmentation case, which yields smaller color segments, the estimated planes may be unreliable, again leading to an inaccurate disparity map. Popular segment-based methods address this problem by iteratively re-fitting planes on grouped segments [2]. We propose a novel algorithm that generates sub-pixel accurate disparities on a per-pixel basis, thus alleviating the problems of methods that estimate disparities per segment. The proposed method computes sub-pixel precision disparity maps using the recent minimum spanning tree (MST) based cost aggregation framework [4]. Since the disparity at every pixel is modeled by a plane equation, the goal is to ensure that all pixels belonging to a planar surface are labeled with the same plane equation. We show that using a reduced and refined set of planes as candidate labels in the aggregation framework ensures homogeneous labeling within a color segment.

Proposed Method

Our method computes an initial set of plane equations (the label set) by fitting planes inside each color segment using the consistent disparities from an initial disparity map. The initial disparity map may be generated by any local or global algorithm.
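The plane-fitting step above can be sketched as a least-squares fit of a disparity plane d = a·x + b·y + c to the consistent disparities of one color segment. This is a minimal illustration; the function name and toy data are ours, not from the paper.

```python
import numpy as np

def fit_segment_plane(pixels, disparities):
    """Least-squares fit of a disparity plane d = a*x + b*y + c to the
    consistent disparities of one color segment (illustrative sketch)."""
    xs, ys = pixels[:, 0], pixels[:, 1]
    # Design matrix with columns [x, y, 1]
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(disparities, dtype=float), rcond=None)
    return coeffs  # (a, b, c)

# Toy segment whose disparities lie exactly on d = 0.5*x - 0.25*y + 10
pix = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 2]])
d = 0.5 * pix[:, 0] - 0.25 * pix[:, 1] + 10
a, b, c = fit_segment_plane(pix, d)
```

With exactly planar input the recovered coefficients match the generating plane; with noisy real disparities the fit is a least-squares estimate instead.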
These plane equations form the initial label set, and a matching cost volume over this set is computed for every pixel. The cost volume is aggregated using the MST-based cost aggregation framework, and a winner-take-all (WTA) selection over the aggregated cost volume gives the initial labeling. The number of labels in the initial set is of the order of the number of segments, with one plane estimate per segment. The initial labeling is then used together with the color segmentation to filter the planes and generate a reduced set. This framework of plane filtering followed by re-labeling leads to a more accurate disparity map. In addition, segment analysis is used to modify the plane matching cost: we weigh the pixel matching cost by a support factor derived from the distribution of plane labels within the color segment.
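The WTA step over plane labels can be sketched as follows: at each pixel, pick the plane label with the minimum aggregated cost and evaluate its plane equation to obtain a sub-pixel disparity. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def wta_plane_label(agg_cost, planes, x, y):
    """Winner-take-all over the aggregated cost volume at pixel (x, y):
    select the plane label with minimum aggregated cost, then evaluate
    its plane equation d = a*x + b*y + c for a sub-pixel disparity."""
    agg_cost = np.asarray(agg_cost)      # shape: (num_labels,)
    best = int(np.argmin(agg_cost))
    a, b, c = planes[best]
    return best, a * x + b * y + c

# Two candidate plane labels; the second has the lower aggregated cost
planes = [(0.0, 0.0, 5.0), (0.5, -0.25, 10.0)]
label, disp = wta_plane_label([0.8, 0.3], planes, x=4, y=4)
```

Because the winning label carries a full plane equation rather than an integer disparity, the output is sub-pixel by construction.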
We present a hierarchical method for estimating pixel-resolution disparity from a raw Plenoptic 2.0 light field capture. Accurate pixel-resolution disparity is essential for reconstructing a high-quality conventional image, and for applications that depend on disparity, such as object segmentation and bokeh. Most light field disparity estimation methods in the literature compute disparity at microlens resolution, which is much lower than the resolution of the final reconstructed image. The algorithms that do compute pixel-resolution disparity are iterative, making them computationally complex. The proposed method computes disparity hierarchically, in two steps. In the first step, microlens-resolution disparity is computed and used to reconstruct a conventional high-resolution image. In the second step, a globally smooth and accurate disparity map is estimated at the pixel level on the reconstructed image, using the computationally efficient minimum spanning tree based cost aggregation approach. Experimental results demonstrate that the disparity maps generated by our method are more accurate than those produced by the Multibaseline and Raytrix algorithms.
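Both abstracts rely on MST-based cost aggregation, which can be sketched in the spirit of the non-local aggregation framework cited as [4]: build an MST over the 4-connected pixel grid with color-difference edge weights, then aggregate the per-pixel matching cost with support exp(-D/sigma), where D is the tree distance, using two linear passes. This is our simplified reading, not the papers' exact implementation.

```python
import numpy as np
from math import exp

def mst_aggregate(img, cost, sigma=0.1):
    """Sketch of MST cost aggregation: Kruskal MST over the 4-connected
    grid (edge weight = absolute color difference), then a leaf-to-root
    and a root-to-leaf pass so that each pixel's aggregated cost is
    sum_q exp(-tree_distance(p, q) / sigma) * cost[q]."""
    h, w = img.shape
    n = h * w
    idx = lambda y, x: y * w + x
    edges = []                              # candidate 4-neighbor edges
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(float(img[y, x]) - img[y, x + 1]), idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(float(img[y, x]) - img[y + 1, x]), idx(y, x), idx(y + 1, x)))
    # Kruskal's algorithm with union-find
    uf = list(range(n))
    def find(a):
        while uf[a] != a:
            uf[a] = uf[uf[a]]
            a = uf[a]
        return a
    adj = [[] for _ in range(n)]
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb:
            uf[ra] = rb
            s = exp(-wgt / sigma)           # similarity along this tree edge
            adj[a].append((b, s))
            adj[b].append((a, s))
    # BFS order from an arbitrary root (node 0)
    order, par, sim = [0], [-1] * n, [1.0] * n
    seen = [False] * n
    seen[0] = True
    for u in order:
        for v, s in adj[u]:
            if not seen[v]:
                seen[v] = True
                par[v], sim[v] = u, s
                order.append(v)
    # Leaf-to-root: up[p] accumulates support from p's subtree
    up = np.asarray(cost, dtype=float).copy()
    for u in reversed(order[1:]):
        up[par[u]] += sim[u] * up[u]
    # Root-to-leaf: fold in support from the rest of the tree
    agg = up.copy()
    for u in order[1:]:
        agg[u] = up[u] + sim[u] * (agg[par[u]] - sim[u] * up[u])
    return agg.reshape(h, w)

# Toy 2x2 image with one bright pixel and cost mass only at pixel (0, 0)
img = np.array([[0.0, 0.0], [0.0, 1.0]])
agg = mst_aggregate(img, [1.0, 0.0, 0.0, 0.0], sigma=1.0)
```

On this toy grid, pixels connected to (0, 0) through zero-weight edges receive full support, while the dissimilar pixel's support decays by exp(-1), illustrating why aggregation respects color boundaries.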