In this paper, we examine the suitability of the correlogram for background subtraction as a step towards moving object detection. The correlogram captures inter-pixel relationships in a region and proves effective for modeling dynamic backgrounds. We propose a multi-channel correlogram that combines intra-channel and inter-channel correlograms, exploiting full color information through inter-pixel relations both within each color plane and across planes. We then derive a novel feature, termed the multi-channel kernel fuzzy correlogram, obtained by applying a fuzzy membership transformation to the multi-channel correlogram. The multi-channel kernel fuzzy correlogram maps the multi-channel correlogram into a space of reduced dimensionality and is less sensitive to noise. The approach handles multimodal distributions without maintaining multiple models per pixel, unlike traditional approaches. It does not require ideal background frames for background model initialization and can be initialized even in the presence of moving objects. The effectiveness of the proposed method is illustrated on different video sequences.
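To make the construction concrete, the sketch below computes intra- and inter-channel correlograms over a color region and applies a Gaussian-kernel fuzzy membership transform to reduce dimensionality. This is a minimal illustration under stated assumptions, not the authors' exact formulation: the quantization level q, pixel distance d, membership width sigma, prototype centers, and all function names are introduced here for illustration.

# Minimal sketch of a multi-channel correlogram followed by a fuzzy
# membership (kernel) transform. The quantization level q, pixel distance d,
# Gaussian width sigma, and prototype centers are illustrative assumptions.
import numpy as np

def channel_correlogram(plane_a, plane_b, d=1, q=8):
    """Co-occurrence of quantized levels between two color planes at
    pixel distance d (intra-channel when plane_a is plane_b)."""
    qa = (plane_a.astype(np.float64) * q / 256).astype(int).clip(0, q - 1)
    qb = (plane_b.astype(np.float64) * q / 256).astype(int).clip(0, q - 1)
    hist = np.zeros((q, q))
    # Count level pairs separated by d along rows and along columns.
    for src, dst in [(qa[:, :-d], qb[:, d:]), (qa[:-d, :], qb[d:, :])]:
        np.add.at(hist, (src.ravel(), dst.ravel()), 1)
    total = hist.sum()
    return hist / total if total else hist

def multichannel_correlogram(region, d=1, q=8):
    """Stack intra- and inter-channel correlograms of an HxWx3 region."""
    feats = []
    for i in range(3):
        for j in range(3):  # i == j: intra-channel, i != j: inter-channel
            feats.append(channel_correlogram(region[..., i], region[..., j], d, q).ravel())
    return np.concatenate(feats)

def kernel_fuzzy_transform(corr, centers, sigma=0.05):
    """Map the raw correlogram onto fuzzy memberships of a small set of
    prototype centers, reducing dimensionality and smoothing noise."""
    dists = np.linalg.norm(corr[None, :] - centers, axis=1)
    mu = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    return mu / (mu.sum() + 1e-12)

# Usage: the fuzzy feature of a region is what the background model compares.
region = np.random.randint(0, 256, (16, 16, 3), dtype=np.uint8)
centers = np.random.rand(10, 9 * 8 * 8) * 0.02   # placeholder prototypes
feature = kernel_fuzzy_transform(multichannel_correlogram(region), centers)
print(feature.shape)  # (10,) -- far smaller than the raw 576-dim correlogram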
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for supervised learning-based facial expression recognition methods, because a limited amount of training data cannot accommodate all appearance variability across faces with respect to race, pose, lighting, facial biases, and so on. Moreover, classifying emotions in every frame is unnecessary, since the user stays neutral for the majority of the time in typical applications such as video chat or photo album/web browsing. Detecting the neutral state at an early stage and bypassing those frames in emotion classification saves computational power. In this paper, we propose a lightweight neutral-versus-emotion classification engine that acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on that KE point. As a result, the proposed method improves emotion recognition (ER) accuracy while reducing the computational complexity of the ER system, as validated on multiple databases.
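The gating idea can be sketched as follows: patches around KE points are compared against per-user neutral templates, and frames judged neutral are skipped before the heavier emotion classifier runs. The KE point locations, patch size, normalized cross-correlation similarity, search shift, and threshold below are all illustrative assumptions; they stand in for, and simplify, the paper's statistical texture model.

# Illustrative neutral-vs-emotion gate: compare patches around key emotion
# (KE) points against per-user neutral templates; skip neutral frames.
# NCC similarity, patch size, shift range, and threshold are assumptions.
import numpy as np

PATCH = 11  # odd patch side length

def extract_patch(gray, y, x, half=PATCH // 2):
    return gray[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12
    return (a * b).sum() / denom

def is_neutral(gray, ke_points, templates, shift=2, thresh=0.8):
    """Frame is neutral if every KE point matches its neutral template at
    some small spatial shift (crude robustness to KE-point drift)."""
    for (y, x), tmpl in zip(ke_points, templates):
        best = max(ncc(extract_patch(gray, y + dy, x + dx), tmpl)
                   for dy in range(-shift, shift + 1)
                   for dx in range(-shift, shift + 1))
        if best < thresh:
            return False  # a region deviates -> run the emotion classifier
    return True

# Usage: build templates from a reference neutral frame, then gate frames.
neutral = np.random.rand(128, 128)
ke_points = [(40, 40), (40, 88), (90, 64)]          # e.g. eyes, mouth corner
templates = [extract_patch(neutral, y, x) for y, x in ke_points]
frame = neutral + 0.01 * np.random.rand(128, 128)   # nearly neutral frame
print(is_neutral(frame, ke_points, templates))      # likely True -> skip ER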
We propose a new algorithm for moving object detection in the presence of challenging dynamic background conditions. We use a set of fuzzy aggregated multifeature similarity measures applied to multiple models corresponding to multimodal backgrounds. The algorithm is enriched with a neighborhood-supported model initialization strategy for faster convergence, and a model-level fuzzy aggregation measure drives background model maintenance for greater robustness. Similarity functions are evaluated between the corresponding elements of the current feature vector and the model feature vectors. Concepts from the Sugeno and Choquet integrals are incorporated to compute fuzzy similarities from the ordered similarity function values for each model, and both model updating and the foreground/background classification decision are based on this set of fuzzy integrals. The proposed approach completely avoids explicit offline training for background model initialization and can be initialized even in the presence of moving objects. The feature space combines intensity and statistical texture features for better object localization and robustness. Our proposed algorithm is shown to outperform other multi-model background subtraction algorithms, and our qualitative and quantitative studies illustrate how it mitigates a variety of challenging situations.
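The aggregation step can be illustrated with the standard definitions of the Choquet and Sugeno integrals over ordered similarity values. The sketch below uses a symmetric (cardinality-based) fuzzy measure and a decision threshold of 0.7 purely as assumptions for demonstration; the paper's actual fuzzy measure, features, and thresholds may differ.

# Sketch of Choquet- and Sugeno-integral aggregation of per-feature
# similarities against one background model. The symmetric fuzzy measure
# g(|A| = k) = k/n and the 0.7 threshold are illustrative assumptions.
import numpy as np

def choquet(similarities, measure):
    """Choquet integral of values h_i w.r.t. a symmetric fuzzy measure,
    where measure[k] = g(any set of k criteria)."""
    h = np.sort(similarities)                    # h_(1) <= ... <= h_(n)
    n = len(h)
    prev, total = 0.0, 0.0
    for i in range(n):
        total += (h[i] - prev) * measure[n - i]  # g of the n-i top criteria
        prev = h[i]
    return total

def sugeno(similarities, measure):
    """Sugeno integral: max over i of min(h_(i), g of the top n-i criteria)."""
    h = np.sort(similarities)
    n = len(h)
    return max(min(h[i], measure[n - i]) for i in range(n))

# Symmetric measure g(|A|=k) = k/n; Choquet then reduces to plain averaging.
n = 3  # e.g. intensity plus two statistical texture features
measure = np.array([k / n for k in range(n + 1)])  # measure[0]=0, measure[n]=1

sims = np.array([0.9, 0.8, 0.3])     # per-feature similarity to one model
print(choquet(sims, measure))        # 0.667 (equals the mean here)
print(sugeno(sims, measure))         # 0.667
is_background = choquet(sims, measure) > 0.7  # hypothetical decision rule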