In this paper, we present a perceptual distortion measure that predicts image integrity far better than mean-squared error. This perceptual distortion measure is based on a model of human visual processing that fits empirical measurements of the psychophysics of spatial pattern detection. The proposed model of human visual processing involves two major components: a steerable pyramid transform and contrast normalization. We also illustrate the usefulness of the model in predicting perceptual distortion in real images.
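As a rough illustration of the second component, the sketch below shows divisive contrast normalization in a generic form: each squared channel response is divided by a saturation constant plus the pooled energy across channels. The constants `sigma` and `k`, the function name, and the pooling axis are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def contrast_normalize(responses, sigma=0.1, k=1.0):
    """Divisive contrast normalization (generic form; assumed parameters).

    responses: array of shape (channels, positions) of linear filter outputs.
    Each squared response is normalized by a semi-saturation constant
    sigma**2 plus the energy pooled over all channels at that position.
    """
    energy = responses ** 2
    pooled = energy.sum(axis=0, keepdims=True)  # pool energy across channels
    return k * energy / (sigma ** 2 + pooled)

# Two orientation channels at two spatial positions (toy values)
channels = np.array([[0.5, 2.0],
                     [0.1, 2.0]])
out = contrast_normalize(channels)
```

Note how normalization saturates: at the second position both channels are strong, so each normalized response is pushed toward the ceiling k, which is what makes the measure sensitive to relative rather than absolute contrast.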
We describe a system that is being used to segment gray matter from magnetic resonance imaging (MRI) and to create connected cortical representations for functional MRI (fMRI) visualization. The method exploits knowledge of the anatomy of the cortex and incorporates structural constraints into the segmentation. First, the white matter and cerebrospinal fluid (CSF) regions in the MR volume are segmented using a novel technique of posterior anisotropic diffusion. Then, the user selects the cortical white matter component of interest, and its structure is verified by checking for cavities and handles. After this, a connected representation of the gray matter is created by a constrained growing-out from the white matter boundary. Because the connectivity is computed, the segmentation can be used as input to several methods of visualizing the spatial pattern of cortical activity within gray matter. In our case, the connected representation of gray matter is used to create a flattened representation of the cortex. Then, fMRI measurements are overlaid on the flattened representation, yielding a representation of the volumetric data within a single image. The software is freely available to the research community.
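The constrained growing-out step can be sketched schematically in 2-D: starting from the white-matter boundary, accept only neighboring voxels whose intensity falls in a gray-matter range, and limit the growth depth so the label stays a thin connected layer. The function name, intensity thresholds, 4-connectivity, and depth limit below are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque
import numpy as np

def grow_gray_matter(intensity, white_mask, gray_lo, gray_hi, max_depth=3):
    """Schematic constrained region growing from the white-matter boundary.

    Breadth-first growth outward from white-matter voxels; a voxel joins the
    gray-matter label only if its intensity lies in [gray_lo, gray_hi] and it
    is within max_depth steps of white matter, keeping the layer connected.
    """
    gray = np.zeros_like(white_mask, dtype=bool)
    frontier = deque()
    rows, cols = intensity.shape
    for i in range(rows):               # seed the search at every white voxel
        for j in range(cols):
            if white_mask[i, j]:
                frontier.append((i, j, 0))
    while frontier:
        i, j, d = frontier.popleft()
        if d >= max_depth:              # stop growing past the depth limit
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connectivity
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                if (not white_mask[ni, nj] and not gray[ni, nj]
                        and gray_lo <= intensity[ni, nj] <= gray_hi):
                    gray[ni, nj] = True
                    frontier.append((ni, nj, d + 1))
    return gray
```

Because voxels are labeled only when reached from the white-matter seed, the result is connected by construction, which is the property the visualization and flattening stages rely on.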
Extended Abstract

Estimating motion between two images plays a vital role in many applications and has drawn a lot of attention during the last two decades. There are many ways to approach this problem, and indeed many algorithms have been proposed for this task, e.g. [2, 3, 1]. In Barron et al. [1] a comparative survey of many motion estimation techniques is given. One family of such algorithms which was found to perform well is the family of gradient-based methods, originally proposed by Horn and Schunck [2].

The gradient-based methods emerge from the assumption that the intensity value of a physical point in a scene does not change along the image sequence. Denoting the intensity values of the image sequence by the function I(x, y, t), where (x, y) is the spatial position and t is the temporal axis, the brightness constancy assumption along the image stream yields [2]:

dI(x, y, t)/dt = 0 .    (1)

Defining (u^x, u^y) = (dx/dt, dy/dt) as the spatial velocity of each spatio-temporal point in the image sequence, we obtain

I_x u^x + I_y u^y + I_t = 0 .    (2)

Here I_x, I_y and I_t denote the spatial and temporal derivatives. This Brightness Constancy Equation (BCE) relates the spatial and temporal gradients of an image sequence to the motion vector (u^x, u^y) at each location (x, y, t).

One issue that is critical to the implementation of the above BCE is that image derivatives are computed based on sampled information. It is commonly agreed [1, 4] that approximating the spatio-temporal derivatives by finite differences produces error in the above equation and subsequently in the estimated motion. In their comparison study on gradient-based motion estimation, Barron et al. [1] conclude that "the method of numerical differentiation is very important - differences between first order pixel differencing and higher order central differences are very noticeable". Several attempts to define or design these derivative filters, together or separately, have been reported in the literature [1, 4].
All these methods treat the above question as a problem of optimally designing gradient operators, overlooking the fact that these gradients are to be used for motion estimation. In this paper we first propose a technique to derive a set of filters which are optimal directly with respect to this goal. These filters are designed to give the best estimation results in terms of accuracy, where their derivative characteristics are not a demand but a by-product. In our scheme the following requirements are met:
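To make the role of the BCE and the derivative approximation concrete, here is a minimal least-squares sketch in the spirit of gradient-based estimation: a single global translation between two frames, with spatial derivatives from central differences (NumPy's `np.gradient`) and the temporal derivative from a frame difference. This is a generic baseline, not the optimal-filter scheme proposed in the paper.

```python
import numpy as np

def estimate_global_motion(frame1, frame2):
    """Least-squares motion from the BCE, assuming one global translation."""
    Ix = np.gradient(frame1, axis=1)   # spatial derivative along x (columns)
    Iy = np.gradient(frame1, axis=0)   # spatial derivative along y (rows)
    It = frame2 - frame1               # temporal derivative (frame difference)
    # Stack the BCE  Ix*ux + Iy*uy + It = 0  over all pixels and solve
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (ux, uy), *_ = np.linalg.lstsq(A, b, rcond=None)[:1] + tuple()
    return ux, uy

# Smooth synthetic frame and a copy shifted by a subpixel amount
x, y = np.meshgrid(np.arange(64), np.arange(64))
I0 = np.sin(0.2 * x) * np.cos(0.15 * y)
true_ux, true_uy = 0.3, -0.2
I1 = np.sin(0.2 * (x - true_ux)) * np.cos(0.15 * (y - true_uy))

ux, uy = estimate_global_motion(I0, I1)
```

Even on a smooth synthetic pattern the recovered motion deviates slightly from the true shift, because central differences only approximate the true derivatives; that residual bias is exactly the error the paper's goal-directed filter design targets.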
This paper presents a general formulation enabling construction of all functions that are steerable under any transformation group. The method is based on a Lie-group theoretic approach.
A new cascade basis reduction method of computing the optimal least-squares set of basis functions to steer a given function locally is presented. The method combines the Lie group-theoretic and the singular value decomposition approaches such that their respective strengths complement each other. Since the Lie group-theoretic approach is used, the sets of basis and steering functions computed can be expressed in analytic form. Because the singular value decomposition method is used, these sets of basis and steering functions are optimal in the least-squares sense. Most importantly, the computational complexity in designing the basis functions for transformation groups with large numbers of parameters is significantly reduced. The efficiency of the cascade basis reduction method is demonstrated by designing a set of basis functions to steer a Gabor function under the four-parameter linear transformation group.
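As a concrete instance of steerability (the classic two-basis rotation example, not the cascade basis reduction method itself), the directional derivative of a 2-D Gaussian at any angle theta is an exact linear combination of just two basis filters, the x- and y-derivatives, with steering functions cos(theta) and sin(theta):

```python
import numpy as np

# Basis filters: first derivatives of a 2-D Gaussian
x, y = np.meshgrid(np.linspace(-3, 3, 101), np.linspace(-3, 3, 101))
G = np.exp(-(x**2 + y**2))
Gx = -2 * x * G          # basis 1: derivative along x
Gy = -2 * y * G          # basis 2: derivative along y

theta = np.deg2rad(30)
# Steered filter: linear combination with steering functions cos/sin
steered = np.cos(theta) * Gx + np.sin(theta) * Gy
# Direct construction: derivative along the rotated axis x' = x cos + y sin
xr = x * np.cos(theta) + y * np.sin(theta)
direct = -2 * xr * G
```

Here `steered` and `direct` agree exactly for every theta, which is what it means for the function to be steerable under the one-parameter rotation group; the cascade basis reduction method extends this construction to groups with more parameters, such as the four-parameter linear group acting on a Gabor function.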