Moving object detection is a critical task for any surveillance system. Conventionally, it is performed using consecutive frame differencing or background models built on mathematical or probabilistic formulations. These approaches, however, depend on initial conditions and need some time to learn their models. A further bottleneck is that they require a clean background, or must first construct one, and must be updated regularly to cope with illumination changes. In this paper, moving object detection is performed using visual attention, which is background independent and therefore requires no background formulation or updating. Several bottom-up approaches and one combined bottom-up and top-down approach are proposed. The proposed approaches are more efficient because they neither learn a background model nor depend on previous video frames. Results indicate that the proposed approach works even in the presence of slight background motion and under various outdoor conditions.
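The abstract above does not give implementation details, so the following is a purely illustrative sketch of a bottom-up attention map used to flag objects without any background model. It uses the classical spectral-residual saliency model, which is an assumption on our part rather than the attention model used by the authors; the synthetic frame, the 0.5 threshold, and the filter sizes are likewise illustrative choices.

    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        """Bottom-up saliency map via the spectral-residual approach."""
        f = np.fft.fft2(gray)
        log_amp = np.log(np.abs(f) + 1e-8)
        phase = np.angle(f)
        # Residual = log amplitude minus its locally averaged version.
        residual = log_amp - uniform_filter(log_amp, size=3)
        # Back-transform with the original phase; squared magnitude = saliency.
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        sal = gaussian_filter(sal, sigma=2.5)
        return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

    # Synthetic frame: noisy flat background plus one bright "object".
    frame = 0.1 * np.random.rand(128, 128)
    frame[50:70, 60:80] += 1.0

    saliency = spectral_residual_saliency(frame)
    object_mask = saliency > 0.5   # single frame, no background model needed
    print(object_mask.sum(), "salient pixels detected")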
We present a new, robust and computationally efficient method for estimating the probability density of the intensity values in an image. Our approach makes use of a continuous representation of the image and develops a relation between probability density at a particular intensity value and image gradients along the level sets at that value. Unlike traditional sample-based methods such as histograms, minimum spanning trees (MSTs), Parzen windows or mixture models, our technique expressly accounts for the relative ordering of the intensity values at different image locations and exploits the geometry of the image surface. Moreover, our method avoids the histogram binning problem and requires no critical parameter tuning. We extend the method to compute the joint density between two or more images. We apply our density estimation technique to the task of affine registration of 2D images using mutual information and show good results under high noise.
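The abstract does not spell out the relation between the density and the image gradients. As a sketch, assuming a differentiable image $I$ defined on a continuous domain $\Omega$, the coarea formula gives one way to express the density at an intensity value $\alpha$ as an integral along the corresponding level set (the notation here is ours, not the paper's):

    p(\alpha) \;=\; \frac{1}{|\Omega|} \int_{\{\mathbf{x}\,\in\,\Omega \,:\, I(\mathbf{x}) = \alpha\}} \frac{1}{\lVert \nabla I(\mathbf{x}) \rVert}\, \mathrm{d}s

Intuitively, the density is large at intensity values where the image is nearly flat (small gradients, long slowly traversed level sets) and small across sharp edges, which is exactly the geometric information that sample-based histograms discard.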
Mutual information (MI) based image-registration methods that use histograms are known to suffer from the so-called binning problem, caused by the absence of a principled technique for choosing the "optimal" number of bins for the joint or marginal distributions. In this paper, we show that forgoing the notion of an image as a set of discrete pixel locations and adopting a continuous representation solves this problem. We propose a new technique for computing joint image histograms that makes use of such a continuous representation. We report results on affine registration of a pair of 2D medical images under high noise, and demonstrate the smoothness, with respect to the transformation, of information-theoretic similarity measures such as joint entropy (JE) and MI when the proposed technique (referred to as the "robust histogram") is used to compute the required probability distributions.
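To make the binning problem concrete, the following minimal sketch computes the standard sample-based joint histogram and the resulting MI, the baseline that the robust histogram is intended to replace. The bin count n_bins is the parameter for which no principled choice exists; the images and bin counts below are illustrative, not from the paper.

    import numpy as np

    def mutual_information(img1, img2, n_bins=32):
        """Standard sample-based MI from a joint histogram of two images."""
        joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=n_bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of img1
        py = pxy.sum(axis=0, keepdims=True)   # marginal of img2
        nz = pxy > 0                          # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    # The MI value (and hence the registration criterion) shifts with n_bins,
    # which is the "binning problem" the robust histogram avoids.
    rng = np.random.default_rng(0)
    fixed = rng.random((64, 64))
    moving = fixed + 0.1 * rng.standard_normal((64, 64))
    for n_bins in (16, 32, 64):
        print(n_bins, mutual_information(fixed, moving, n_bins))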