Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
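The abstract describes feeding hand-crafted saliency feature vectors (rather than raw pixels) into a network and then refining the result with superpixel-based Laplacian propagation. Below is a minimal, hedged sketch of that general idea; the network size, the feature dimension, the affinity construction, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumptions, not the paper's code): a small network that maps hand-crafted
# saliency cues (e.g., color/depth contrast, compactness priors) to a per-superpixel
# saliency score, followed by Laplacian propagation for spatial consistency.
import numpy as np
import torch
import torch.nn as nn

class SaliencyFusionNet(nn.Module):
    """Learns how low-level saliency cues combine, instead of operating on raw pixels."""
    def __init__(self, num_features=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),   # saliency score in [0, 1]
        )

    def forward(self, x):                     # x: (num_superpixels, num_features)
        return self.net(x).squeeze(-1)

def laplacian_propagation(initial_saliency, affinity, lam=0.1):
    """Refine coarse scores by solving (I + lam * L) s = initial_saliency,
    where L is the graph Laplacian of the superpixel affinity matrix."""
    degree = np.diag(affinity.sum(axis=1))
    laplacian = degree - affinity
    n = affinity.shape[0]
    return np.linalg.solve(np.eye(n) + lam * laplacian, initial_saliency)

# Usage with random stand-ins for real superpixel features and affinities:
features = torch.rand(200, 12)                # 200 superpixels, 12 cues each
affinity = np.random.rand(200, 200); affinity = (affinity + affinity.T) / 2
coarse = SaliencyFusionNet()(features).detach().numpy()
refined = laplacian_propagation(coarse, affinity)
```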
According to the dichromatic reflection model, previous methods for specular reflection separation in image processing often separate the specular reflection from a single image using patch-based priors. Due to the lack of global information, these methods often cannot completely separate the specular component of an image and are inclined to degrade image textures. In this paper, we derive a global color-lines constraint from the dichromatic reflection model to effectively recover the specular and diffuse reflection. Our key observation is that each image pixel lies along a color line in normalized RGB space, and the different color lines, which represent distinct diffuse chromaticities, intersect at one point, namely the illumination chromaticity. Pixels along the same color line spread over the entire image, and their distances to the illumination chromaticity reflect the amount of the specular reflection component. With the global (non-local) information from these color lines, our method can effectively separate the specular and diffuse reflection components of a single image in a pixel-wise manner, and it is suitable for real-time applications. Our experimental results on synthetic and real images show that our method outperforms the state-of-the-art methods in separating specular reflection.
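To make the dichromatic-model reasoning concrete, here is a minimal, hedged sketch of a pixel-wise separation under I = m_d * Lambda + m_s * Gamma, where Gamma is the illumination chromaticity. It is not the paper's exact algorithm: the per-pixel diffuse chromaticity Lambda is approximated by extending the color line from Gamma through the pixel's chromaticity to the boundary of the chromaticity simplex, and the two coefficients are then recovered by a per-pixel least-squares fit.

```python
# Sketch under stated assumptions: pixel-wise specular/diffuse separation
# with the dichromatic model I = m_d * Lambda + m_s * Gamma.
import numpy as np

def separate_specular(image, gamma):
    """image: (H, W, 3) linear RGB; gamma: (3,) illumination chromaticity (sums to 1)."""
    eps = 1e-8
    intensity = image.sum(axis=2, keepdims=True) + eps
    sigma = image / intensity                          # per-pixel chromaticity
    direction = sigma - gamma                          # along the color line, away from Gamma
    # Step to the simplex boundary: largest t with gamma + t * direction >= 0 in all channels.
    with np.errstate(divide="ignore", invalid="ignore"):
        t = np.where(direction < 0, -gamma / direction, np.inf)
    t_max = np.clip(t.min(axis=2, keepdims=True), 1.0, None)
    lam = gamma + t_max * direction                    # approximate diffuse chromaticity
    # Solve I = m_d * lam + m_s * gamma per pixel (least squares over the 3 channels).
    a = (lam * lam).sum(axis=2)
    b = (lam * gamma).sum(axis=2)
    c = float((gamma * gamma).sum())
    p = (image * lam).sum(axis=2)
    q = (image * gamma).sum(axis=2)
    det = a * c - b * b + eps
    m_d = np.clip((c * p - b * q) / det, 0, None)      # diffuse coefficient
    m_s = np.clip((a * q - b * p) / det, 0, None)      # specular coefficient
    diffuse = m_d[..., None] * lam
    specular = m_s[..., None] * gamma
    return diffuse, specular
```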