Motion segmentation is an important task in video surveillance and in many high-level vision applications. This paper proposes two generic methods for motion segmentation from surveillance video sequences captured by different kinds of sensors, such as aerial, Pan-Tilt-Zoom (PTZ), thermal, and night-vision cameras. Motion segmentation is achieved by applying Hotelling's T-squared test to the spatial-neighborhood RGB color intensity values of each pixel in two successive temporal frames. A modified version of Hotelling's T-squared test is also proposed for motion segmentation; compared with the standard test, the modified formula yields better results in both computational time and output quality. Experiments, together with qualitative and quantitative comparisons with an existing method, have been carried out on the standard IEEE PETS (2006, 2009, and 2013) and IEEE Change Detection (2014) datasets to demonstrate the efficacy of the proposed methods in dynamic environments, and the results obtained are encouraging.
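The abstract above does not give implementation details, but the core per-pixel decision it describes can be sketched with the standard two-sample Hotelling's T-squared test: collect the RGB vectors of a pixel's spatial neighborhood in two successive frames, test whether their mean color vectors differ significantly, and flag the pixel as motion if they do. The window size and significance level below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy import stats

def hotelling_t2_motion(patch_prev, patch_curr, alpha=0.01):
    """Two-sample Hotelling's T-squared test on neighborhood RGB vectors.

    patch_prev, patch_curr: (n, 3) arrays holding the RGB values of the
    same spatial neighborhood of a pixel in frames t-1 and t.
    Returns True if the mean color vectors differ significantly,
    i.e. the pixel is flagged as a motion pixel.
    """
    n1, p = patch_prev.shape
    n2, _ = patch_curr.shape
    mean1, mean2 = patch_prev.mean(axis=0), patch_curr.mean(axis=0)
    # Pooled sample covariance of the two neighborhoods
    s1 = np.cov(patch_prev, rowvar=False)
    s2 = np.cov(patch_curr, rowvar=False)
    sp = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)
    diff = mean1 - mean2
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.pinv(sp) @ diff
    # T^2 relates to an F distribution with (p, n1 + n2 - p - 1) dof
    f_stat = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    p_value = stats.f.sf(f_stat, p, n1 + n2 - p - 1)
    return p_value < alpha
```

For a 3x3 neighborhood, `patch_prev` and `patch_curr` are 9x3 arrays; a pinned-down motion mask is obtained by sliding this test over every pixel of the frame.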
Object segmentation and classification in video sequences is a classical and critical problem that continues to be addressed because of its many real-time applications, such as autonomous vehicles and smart surveillance. Segmenting and classifying moving objects in video sequences captured in unconstrained real-world conditions is a challenging task. This paper presents a moving-object segmentation and classification method for video sequences captured in a real-time environment. The key contributions of the paper are a method to detect and segment motion regions by applying the non-parametric Kolmogorov–Smirnov statistical test in the spatio-temporal domain, and a probabilistic neural network-based classification method that assigns the moving objects to various classes. Promising results are obtained in experiments on the challenging PETS and Change Detection datasets. To corroborate the efficacy, a comparative analysis with a contemporary method is also performed.
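The motion-detection step described above can be sketched as a sliding two-sample Kolmogorov–Smirnov test: for each pixel, the intensities of its spatial neighborhood in two successive frames are compared, and a significant distributional shift marks the pixel as motion. This is a minimal grayscale sketch; the window size and significance level are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_motion_mask(frame_prev, frame_curr, win=3, alpha=0.05):
    """Per-pixel two-sample Kolmogorov-Smirnov test on the intensities of
    each pixel's win x win neighborhood in two successive grayscale frames.
    Returns a boolean mask that is True where motion is detected.
    """
    h, w = frame_prev.shape
    r = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            a = frame_prev[y - r:y + r + 1, x - r:x + r + 1].ravel()
            b = frame_curr[y - r:y + r + 1, x - r:x + r + 1].ravel()
            # KS rejects H0 (same distribution) -> flag as motion
            if ks_2samp(a, b).pvalue < alpha:
                mask[y, x] = True
    return mask
```

Because the KS test is non-parametric, no Gaussian assumption on pixel intensities is needed, which suits the unconstrained sequences the abstract targets.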
Motion segmentation is an important task in video surveillance and in many high-level vision applications. In this paper, an adaptive method that uses statistics in a temporal framework to segment moving objects from surveillance video sequences captured in dynamic environments is proposed. The proposed method first preprocesses the input video frames with a Gaussian filter for noise reduction. Motion segmentation is then performed by applying a statistical t-test to the neighborhood RGB color intensity values of each pixel in two successive temporal frames. Several experiments, along with a comparison with an existing method, have been carried out on the IEEE PETS (2009 and 2013) and IEEE Change Detection (2014) datasets, which include thermal, normal, PTZ, aerial, and night-vision sensor videos, to demonstrate the efficacy of the proposed method in dynamic environments, and the results obtained are encouraging.
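The pipeline above (Gaussian smoothing followed by a per-pixel t-test on neighborhood intensities across two frames) can be sketched as follows. For brevity this sketch operates on grayscale frames rather than the RGB values the abstract mentions, and the window size, smoothing sigma, and significance level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from scipy.ndimage import gaussian_filter

def ttest_motion_mask(frame_prev, frame_curr, win=3, sigma=1.0, alpha=0.01):
    """Gaussian-smooth two successive grayscale frames, then run a
    two-sample t-test on each pixel's win x win neighborhood; a
    significant mean shift marks the pixel as motion.
    """
    # Preprocessing: Gaussian filter for noise reduction
    prev = gaussian_filter(frame_prev.astype(float), sigma)
    curr = gaussian_filter(frame_curr.astype(float), sigma)
    h, w = prev.shape
    r = win // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(r, h - r):
        for x in range(r, w - r):
            a = prev[y - r:y + r + 1, x - r:x + r + 1].ravel()
            b = curr[y - r:y + r + 1, x - r:x + r + 1].ravel()
            # Welch's t-test on the two neighborhood samples
            p = ttest_ind(a, b, equal_var=False).pvalue
            mask[y, x] = p < alpha
    return mask
```

A color variant would either run the test per channel and combine the decisions, or fall back to a multivariate test such as Hotelling's T-squared.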