In the present paper, we propose a source camera identification (SCI) method for mobile devices based on deep learning. Recently, convolutional neural networks (CNNs) have shown remarkable performance on several tasks such as image recognition, video analysis, and natural language processing. A CNN consists of a set of layers, where each layer is composed of a set of high-pass filters that are applied over the entire input image. This convolution process gives the network the ability to extract features automatically from data and to learn from those features. Our proposal describes a CNN architecture that is able to infer the noise pattern of mobile camera sensors (also known as the camera fingerprint) with the aim of detecting and identifying not only the mobile device used to capture an image (with 98% accuracy), but also the embedded camera with which the image was captured. More specifically, we provide an extensive analysis of the proposed architecture under different configurations. The experiments were carried out using images captured by the cameras of different mobile devices (MICHE-I dataset), and the results obtained demonstrate the robustness of the proposed method.
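As a rough illustration of the kind of architecture the abstract describes, the following PyTorch sketch classifies image patches into a set of candidate source devices. It is not the authors' network: the layer count, filter sizes, patch size, and the ten-device output are assumptions made purely for the example.

```python
# Illustrative sketch only: a small CNN in the spirit of the described SCI
# approach. Layer sizes and the number of devices are assumptions, not the
# authors' exact configuration.
import torch
import torch.nn as nn

class CameraIdCNN(nn.Module):
    def __init__(self, num_devices: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2),   # learned filters applied over the input patch
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_devices)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# Usage: score a batch of 64x64 RGB patches against 10 hypothetical devices.
model = CameraIdCNN(num_devices=10)
logits = model(torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 10])
```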
During the last decade, researchers have verified that clothing can provide information for gender recognition. However, before extracting features, it is necessary to segment the clothing region. We introduce a new clothes segmentation method based on applying the GrabCut technique over a trixel mesh, obtaining very promising results for a near-real-time system. Finally, the clothing features are combined with facial and head context information to outperform previous gender recognition results on a public database.
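For readers unfamiliar with GrabCut, the sketch below shows the standard OpenCV variant initialized from a rectangular clothing prior. It is only a baseline illustration: the paper's trixel-mesh formulation is not reproduced here, and the torso rectangle is a hypothetical region one might derive from a detected face.

```python
# Illustrative sketch only: plain OpenCV GrabCut over a torso bounding box.
# The trixel-mesh step from the paper is not reproduced here.
import cv2
import numpy as np

def segment_clothes(image_bgr: np.ndarray, torso_rect: tuple) -> np.ndarray:
    """Return a binary mask (1 = probable clothing) for the given BGR image."""
    mask = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(image_bgr, mask, torso_rect, bgd_model, fgd_model,
                iterCount=5, mode=cv2.GC_INIT_WITH_RECT)
    # Pixels labelled definite or probable foreground form the clothing mask.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

# Usage: rect = (x, y, width, height) placed below a detected face.
# mask = segment_clothes(cv2.imread("person.jpg"), (120, 200, 180, 260))
```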
Gender Classification (GC) is a natural ability of human beings. Recent improvements in computer vision make it possible to extract information for different classification/recognition purposes. Gender is a soft biometric useful in video surveillance, especially in uncontrolled contexts such as low-light environments with arbitrary poses, facial expressions, occlusions, and motion blur. In this work we present a methodology for the construction of a gait analyzer. The methodology is divided into three major steps: (1) data extraction, where body keypoints are extracted from video sequences; (2) feature creation, where body features are constructed from the body keypoints; and (3) classifier selection, where such data are used to train four different classifiers in order to determine which one performs best. The results are analyzed on the Gotcha dataset, characterized by both the user and the camera being in motion.
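A minimal sketch of step (3), classifier selection, is given below using scikit-learn. The feature matrix X and labels y are assumed to come from steps (1) and (2); the four candidate classifiers listed here are common examples, not necessarily the ones evaluated in the paper.

```python
# Illustrative sketch only: compare four off-the-shelf classifiers on
# pre-computed gait features and keep the one with the best cross-validated
# accuracy. X and y stand in for the output of steps (1)-(2).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

def select_classifier(X: np.ndarray, y: np.ndarray):
    """Return the name of the best classifier and all mean CV scores."""
    candidates = {
        "svm": SVC(),
        "random_forest": RandomForestClassifier(),
        "knn": KNeighborsClassifier(),
        "logreg": LogisticRegression(max_iter=1000),
    }
    scores = {name: cross_val_score(clf, X, y, cv=5).mean()
              for name, clf in candidates.items()}
    best = max(scores, key=scores.get)
    return best, scores

# Usage with synthetic data standing in for gait features:
# best, scores = select_classifier(np.random.rand(200, 30), np.random.randint(0, 2, 200))
```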