Abstract: Image-based rendering takes as input multiple images of an object and generates photorealistic images from novel viewpoints. This approach avoids explicitly modeling scenes by replacing the modeling phase with an object reconstruction phase. Reconstruction is achieved in one of two ways: recovering 3D point locations using multiview stereo techniques, or reasoning about the consistency of each voxel in a discretized object volume. The most challenging problem for image-based reconstruction is the presence of occlusions, which make reconstruction ambiguous for object parts not visible in any input image. These parts must be reconstructed in a visually acceptable way. This paper both reviews image inpainting and argues that inpainting can provide not only attractive reconstructions but also a framework for increasing the accuracy of depth recovery. Digital image inpainting refers to any method that fills in holes of arbitrary topology in images so that the filled regions seem to be part of the original image. Available methods are broadly classified as structural inpainting or textural inpainting. Structural inpainting reconstructs using prior assumptions and boundary conditions, while textural inpainting considers only the available data, from texture exemplars or other templates. Of particular interest is research on structural inpainting applied to 3D models, emphasizing its effectiveness for disocclusion.
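The structural/textural distinction above can be illustrated with the simplest structural prior: a harmonic (diffusion) fill that propagates boundary values smoothly into the hole. The sketch below is a generic illustration of that idea, not the paper's method; the function name and parameters are hypothetical.

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=500):
    """Fill masked pixels by repeatedly averaging their 4-neighbours.

    image: 2-D float array; mask: boolean array, True where pixels are missing.
    This harmonic (Laplace) fill is the simplest structural-inpainting prior:
    it propagates boundary values smoothly into the hole, with no texture model.
    """
    filled = image.copy()
    filled[mask] = 0.0  # initialise the hole with a neutral value
    for _ in range(iterations):
        # average of the four axis-aligned neighbours (borders replicated)
        padded = np.pad(filled, 1, mode="edge")
        neighbours = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                      padded[1:-1, :-2] + padded[1:-1, 2:])
        filled[mask] = neighbours[mask] / 4.0  # only hole pixels are updated
    return filled
```

A textural method would instead copy patches from exemplars into the hole; the structural fill above is preferable only where smooth continuation of the surroundings is visually acceptable.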
Search by Object Model, finding an object inside a target image, is a desirable yet difficult mechanism for querying multimedia data. An added difficulty is that objects can be photographed under different lighting conditions. While human vision presumably achieves color constancy through invariant processing, here we seek only covariant processing and aim to recover the lighting change itself. Making use of feature-consistent locales in an image, we develop a scene partition by localization rather than by image segmentation. A diagonal model for illumination change and a voting scheme in chromaticity space provide a candidate set of lighting-change coefficients for covariant image transformation. For each pair of coefficients, Elastic Correlation, a form of correlation of locale colors, is performed along with a least-squares minimization for pose estimation. With the rotation, scale, and translation parameters thus estimated, we can apply an efficient process of texture support and shape verification. Tests on an image and video database of about 1,500 images show an average recall and precision of over 70%.
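The least-squares pose estimation mentioned above can be sketched generically: given matched model and image locale centroids, a similarity transform (rotation, scale, translation) has a closed linear form. This is a standard linearization, not the paper's exact displacement model; all names are illustrative.

```python
import numpy as np

def fit_similarity(model_pts, image_pts):
    """Least-squares similarity transform from matched 2-D centroids.

    model_pts, image_pts: (N, 2) arrays of corresponding points. Solves
    p' = s*R*p + t by linearising with a = s*cos(theta), b = s*sin(theta):
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty
    which is linear in (a, b, tx, ty).
    """
    x, y = model_pts[:, 0], model_pts[:, 1]
    ones, zeros = np.ones_like(x), np.zeros_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    rhs = np.concatenate([image_pts[:, 0], image_pts[:, 1]])
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    scale = np.hypot(a, b)          # s = sqrt(a^2 + b^2)
    angle = np.arctan2(b, a)        # theta from the (a, b) pair
    return scale, angle, (tx, ty)
```

With two or more correspondences the system is overdetermined, so the least-squares solution also averages out centroid noise, which is why verification steps (texture support, shape checks) can follow on the recovered pose.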
Color object recognition methods based on image retrieval algorithms can handle changes of illumination via image normalization, e.g. simple color-channel normalization [1] or by forming a doubly stochastic image matrix [2]. However, these methods fail if the object sought is surrounded by clutter. Rather than directly trying to find the target, a viable approach is to grow a small number of feature regions called locales [3]. These are defined as a non-disjoint coarse localization based on image tiles. In this paper, locales are grown based on chromaticity, which is less sensitive to illumination change than is color. Using a diagonal model of illumination change, a least-squares optimization on chromaticity recovers the best set of diagonal coefficients for candidate assignments from model locales to test locales stored in a database. If locale centroids are also stored then, adapting a displacement model to include model locale weights, the transformed pose and scale can be recovered. Tests on databases of real images show promising results for object query.
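The diagonal (von Kries) model scales each color channel independently, so the least-squares coefficients decouple per channel and have a closed form. The sketch below illustrates that model on matched locale colors; it is a minimal illustration, not the paper's chromaticity-space formulation, and the function name is hypothetical.

```python
import numpy as np

def diagonal_coefficients(model_rgb, test_rgb):
    """Per-channel diagonal illumination coefficients by least squares.

    model_rgb, test_rgb: (N, 3) mean colours of N matched locales. Under a
    diagonal model, test = diag(d) * model per pixel, so minimising
    sum_i (t_ic - d_c * m_ic)^2 independently for each channel c gives the
    closed-form 1-D least-squares solution d_c = <m_c, t_c> / <m_c, m_c>.
    """
    model = np.asarray(model_rgb, dtype=float)
    test = np.asarray(test_rgb, dtype=float)
    return (model * test).sum(axis=0) / (model * model).sum(axis=0)
```

Each candidate assignment of model locales to test locales yields one such coefficient triple; the assignment with the smallest residual is the natural least-squares choice.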
Recognizing that conspicuous multiple sclerosis (MS) lesions have high intensities in both dual-echo T2- and PD-weighted MR brain images, we show that it is possible to automatically determine a thresholding mechanism to locate conspicuous lesion pixels and also to identify pixels that suffer from reduced intensity due to partial volume effects. To do so, we first transform a T2-PD feature space via a log(T2)-log(T2+PD) remapping. In this feature space, we note that each MR slice, and in fact the whole brain, is approximately transformed into a line structure. Pixels high in both T2 and PD, corresponding to candidate conspicuous lesion pixels, also fall near this line. Therefore we first preprocess images to achieve RF correction, isolation of the brain, and rescaling of image pixels into the range 0-255. Then, following remapping to log space, we find the main linear structure in feature space using a robust estimator that discounts outliers. We first extract the larger conspicuous lesions, which do not show partial volume effects, by performing a second robust regression for 1D distances along the line. The robust estimator concomitantly produces a threshold for outliers, which we identify with conspicuous lesion pixels in the high region. Finally, we perform a third regression on the conspicuous lesion pixels alone, producing a 2D conspicuous lesion line and confidence interval band. This band can be projected back into the adjacent, non-conspicuous region to identify tissue pixels which have been subjected to the partial volume effect.
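The core step, fitting the main line in log feature space with a robust estimator that also yields an outlier threshold, can be sketched with generic iteratively reweighted least squares and a MAD-based scale. The Huber-style weights and 3-sigma cutoff below are standard robust-regression choices, not the paper's exact estimator; all names are illustrative.

```python
import numpy as np

def robust_line_and_outliers(x, y, iters=20, k=3.0):
    """Fit y = a*x + b robustly, then flag high-side outliers.

    Mirrors the idea of fitting the main brain line in log(T2)-log(T2+PD)
    space: IRLS with Huber-style weights downweights points far from the
    line, and the robust residual scale (1.4826 * MAD) gives a threshold,
    so points with residual > k*scale above the line are flagged.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.ones_like(x)
    a, b = 0.0, 0.0
    for _ in range(iters):
        A = np.column_stack([x, np.ones_like(x)])
        # weighted least squares under the current weights
        a, b = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)[0]
        r = y - (a * x + b)
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        # Huber weight: 1 inside the tuning band, decaying outside it
        w = np.minimum(1.0, 1.345 * scale / np.maximum(np.abs(r), 1e-12))
    r = y - (a * x + b)
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))
    outliers = r > k * scale  # residuals far above the fitted line
    return (a, b), outliers
```

The same pattern applies at each of the abstract's three regression stages: the fit produces both a line and a residual scale, and the scale doubles as the data-driven threshold separating candidate lesion pixels from ordinary tissue.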