We present a novel unified framework for both static and space-time saliency detection. Our method is a bottom-up approach that computes so-called local regression kernels (i.e., local descriptors) from the given image (or video), which measure the likeness of a pixel (or voxel) to its surroundings. Visual saliency is then computed from this "self-resemblance" measure. The framework yields a saliency map in which each pixel (or voxel) indicates the statistical likelihood of saliency of a feature matrix given its surrounding feature matrices. As a similarity measure, matrix cosine similarity (a generalization of cosine similarity) is employed. State-of-the-art performance is demonstrated on commonly used human eye-fixation data (static scenes (N. Bruce & J. Tsotsos, 2006) and dynamic scenes (L. Itti & P. Baldi, 2006)) and on some psychological patterns.
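The core quantities above can be sketched briefly: matrix cosine similarity is the Frobenius inner product of two feature matrices normalized by their Frobenius norms, and saliency at a location is high when its feature matrix resembles few of its surrounding feature matrices. The exponential weighting and the bandwidth `sigma` below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def matrix_cosine_similarity(F, G):
    # Frobenius inner product normalized by Frobenius norms
    num = np.sum(F * G)
    den = np.linalg.norm(F, 'fro') * np.linalg.norm(G, 'fro')
    return num / (den + 1e-12)

def self_resemblance_saliency(features, center_idx, sigma=0.07):
    # features: list of local feature matrices; saliency of the center is
    # the inverse of its summed (exponentiated) similarity to the rest
    center = features[center_idx]
    weights = [np.exp((matrix_cosine_similarity(center, G) - 1.0) / sigma**2)
               for i, G in enumerate(features) if i != center_idx]
    return 1.0 / sum(weights)
```

A feature matrix that stands out from its neighborhood receives little resemblance weight and hence a large saliency value.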
We present a novel face representation based on locally adaptive regression kernel (LARK) descriptors [1]. The LARK descriptor measures self-similarity based on a "signal-induced distance" between a center pixel and the surrounding pixels in a local neighborhood. By applying principal component analysis (PCA) and then a logistic function to the LARK descriptors, we develop a new binary-like face representation that achieves state-of-the-art face verification performance on the challenging "Labeled Faces in the Wild" (LFW) benchmark dataset [2]. When training data are available, we employ one-shot similarity (OSS) [3], [4] based on linear discriminant analysis (LDA) [5]. The proposed approach achieves state-of-the-art performance in both the unsupervised setting and the image-restricted training setting (72.23% and 78.90% verification rates, respectively) as a single-descriptor representation with no preprocessing step. Whereas [4] combined 30 distances to achieve 85.13%, we achieve comparable performance (85.1%) with only 14 distances while significantly reducing computational complexity.
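The PCA-then-logistic step described above can be sketched as follows. This is a minimal illustration of the pipeline shape only: the descriptor dimensionality, the number of retained components, and the logistic slope `alpha` are placeholder assumptions, not the paper's settings.

```python
import numpy as np

def pca_project(X, k):
    # X: (n_samples, d) matrix of LARK-style descriptors;
    # project onto the top-k principal components
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def binary_like(Z, alpha=1.0):
    # squash the PCA coefficients with a logistic function,
    # pushing them toward a binary-like representation in (0, 1)
    return 1.0 / (1.0 + np.exp(-alpha * Z))
```

Verification would then compare the resulting binary-like vectors of a face pair with, e.g., a cosine distance.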
We present a novel approach to change detection between two brain MRI scans (reference and target). The proposed method uses a single modality to find subtle changes and does not require prior knowledge (learning) of the types of changes to be sought. The method is based on the computation of a local kernel from the reference image, which measures the likeness of a pixel to its surroundings. This kernel is then used as a feature and compared against analogous features from the target image using cosine similarity. The overall algorithm yields a scalar dissimilarity map (DM) indicating the local statistical likelihood of dissimilarity between the reference and target images. DM values exceeding a threshold then identify meaningful and relevant changes. The proposed method is robust to various challenging conditions, including unequal signal strength.
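The DM construction above can be sketched per pixel: cosine similarity between corresponding feature vectors, converted to a dissimilarity, then thresholded. The feature layout, the `1 - rho` conversion, and the threshold `tau` are illustrative assumptions rather than the method's exact parameters.

```python
import numpy as np

def dissimilarity_map(ref_feats, tgt_feats):
    # ref_feats, tgt_feats: (H, W, d) per-pixel feature vectors
    num = np.sum(ref_feats * tgt_feats, axis=-1)
    den = (np.linalg.norm(ref_feats, axis=-1)
           * np.linalg.norm(tgt_feats, axis=-1) + 1e-12)
    rho = num / den            # cosine similarity in [-1, 1]
    return 1.0 - rho           # 0 where features point the same way

def detect_changes(dm, tau=0.5):
    # DM values exceeding the threshold flag meaningful changes
    return dm > tau
```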
A practical problem addressed recently in computational photography is that of producing a good picture of a poorly lit scene. The consensus approach to this problem involves capturing two images and merging them: using a flash produces one (typically high signal-to-noise ratio [SNR]) image, and turning off the flash produces a second (typically low SNR) image. In this article, we present a novel approach for merging two such images. Our method is a generalization of the guided filter approach of He et al., significantly improving its performance. In particular, we analyze the spectral behavior of the guided filter kernel using a matrix formulation, and introduce a novel iterative application of the guided filter. These iterations consist of two parts: a nonlinear anisotropic diffusion of the noisier image, and a nonlinear reaction-diffusion (residual) iteration of the less noisy one. The results of these two processes are combined in an unsupervised manner. We demonstrate that the proposed approach outperforms state-of-the-art methods for both flash/no-flash denoising and deblurring.
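The building block that the iterations above generalize is the single-channel guided filter of He et al., which fits a local linear model q = a·I + b of the filtered output in terms of the guide image within each window. Below is a minimal sketch of that base filter only (not the paper's iterative diffusion/reaction scheme); the box-filter implementation, window radius `r`, and regularizer `eps` are illustrative choices.

```python
import numpy as np

def box_mean(x, r):
    # mean over a (2r+1)x(2r+1) window via an integral image,
    # with edge-replicated borders
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)), mode='constant')
    w = 2 * r + 1
    s = c[w:, w:] - c[:-w, w:] - c[w:, :-w] + c[:-w, :-w]
    return s / (w * w)

def guided_filter(I, p, r=2, eps=1e-2):
    # I: guide (e.g., the cleaner flash image); p: input to be filtered
    # (e.g., the noisier no-flash image). Local linear model q = a*I + b.
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)
```

In the flash/no-flash setting, the high-SNR flash image serves as the guide, so edges are preserved while noise in the no-flash image is averaged away.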