Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming, and error-prone process, as the initial recordings are often polluted by noise from several sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or attenuated. In this paper, a pre-processing and subsequent sleep-staging pipeline for the analysis of electroencephalographic (EEG) signals is described. Two novel methods of functional connectivity estimation, Synchronization Likelihood (SL) and Relative Wavelet Entropy (RWE), are comparatively investigated for automatic sleep staging of manually pre-processed EEG recordings. A multi-step process that renders the signals suitable for further analysis is described first. Two methods are then proposed that achieve computerized sleep staging by extracting synchronization features from the EEG recordings; these bivariate features provide a functional overview of the brain network, in contrast to most proposed methods, which rely on univariate time- and frequency-domain features. Sleep epochs are annotated by training classifiers on the extracted features; the trained classifiers can then accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the “ENVIHAB” facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains accuracy rates over 90% against ground truth produced by manual sleep staging from two experienced sleep experts. It can therefore be concluded that the proposed feature extraction methods are suitable for semi-automatic sleep staging.
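To illustrate the flavor of the second connectivity measure, the sketch below computes a Relative Wavelet Entropy between the band-energy distributions of two signals, using a crude multilevel Haar decomposition. This is a minimal illustration only: the function names, the Haar filter, the number of levels, and the Kullback-Leibler-style formula are assumptions for demonstration, not the paper's actual implementation.

```python
import numpy as np

def haar_band_energies(x, levels=4):
    """Crude multilevel Haar decomposition; returns the normalized
    energy per band (detail bands plus the final approximation)."""
    energies = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        if len(a) % 2:                          # pad to even length
            a = np.append(a, a[-1])
        d = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # detail coefficients
        a = (a[0::2] + a[1::2]) / np.sqrt(2.0)  # approximation
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))             # final approximation band
    e = np.array(energies)
    return e / e.sum()

def relative_wavelet_entropy(p, q, eps=1e-12):
    """Kullback-Leibler-style divergence between two band-energy
    distributions; it is 0 when the two spectra coincide and grows
    as the energy distributions diverge."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))
```

In this reading, an epoch whose channels show a low RWE between their band-energy profiles is more "synchronized" than one with a high RWE, and such pairwise values can serve as bivariate features for the staging classifier.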
In this paper, we develop a face detection hindering method as a means of preventing the threats that automatic video analysis may pose to people's privacy. Face detection in images or videos is the first step in human-centered video analysis, to be followed, e.g., by automatic face recognition. Therefore, by hindering face detection, we also render automatic face recognition infeasible. To this end, we examine the application of two methods. First, we consider a naive approach: we simply add additive or impulsive noise to the input image until the face can no longer be automatically detected. Second, we examine the application of the SVD-DID face de-identification method. Our experimental results show that both methods attain high face detection failure rates.
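The naive approach can be sketched as a loop that raises the impulsive (salt-and-pepper) noise density until a detector stops firing. The sketch below is a hypothetical illustration under stated assumptions: the image is a NumPy array, and `detects_face` is a placeholder for any boolean face detector (e.g., a Haar-cascade wrapper), not a function from the paper.

```python
import numpy as np

def add_impulse_noise(img, density, rng):
    """Salt-and-pepper noise: flip a fraction `density` of the
    pixels to pure white (255) or pure black (0)."""
    noisy = img.copy()
    mask = rng.random(img.shape) < density     # pixels to corrupt
    salt = rng.random(img.shape) < 0.5         # white vs. black
    noisy[mask & salt] = 255
    noisy[mask & ~salt] = 0
    return noisy

def hinder_detection(img, detects_face, step=0.02, max_density=0.5, seed=0):
    """Increase the noise density in small steps until the supplied
    detector no longer finds a face (or a density cap is reached).
    Returns the noisy image and the density used."""
    rng = np.random.default_rng(seed)
    density = 0.0
    noisy = img
    while detects_face(noisy) and density < max_density:
        density += step
        noisy = add_impulse_noise(img, density, rng)
    return noisy, density
```

The design goal mirrored here is minimality: the loop stops at the lowest noise density that defeats the detector, so the image is degraded no more than necessary.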
Abstract-In this article, a number of methods are analyzed that manipulate images in a manner that hinders face recognition by automatic recognition algorithms. The purpose of these methods is to partly degrade image quality, so that humans can still identify the person or persons in a scene while common classification algorithms fail to do so. The approach involves singular value decomposition (SVD) and projections onto hyperspheres. The experiments show that these methods reduce the correct classification rate by over 90%, while the final image is not degraded beyond recognition by humans.
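The SVD ingredient of such methods can be illustrated by rank truncation: rebuilding the image from only its largest singular values discards the fine detail that classifiers tend to exploit while keeping the coarse structure humans use. This sketch shows only that truncation step; the abstract's full approach also involves hypersphere projections, which are not reproduced here, and `keep` is an illustrative parameter.

```python
import numpy as np

def svd_degrade(img, keep=10):
    """Reconstruct a grayscale image from its `keep` largest
    singular values, discarding the remaining components."""
    u, s, vt = np.linalg.svd(np.asarray(img, dtype=float),
                             full_matrices=False)
    s[keep:] = 0.0                      # zero out small singular values
    out = u @ np.diag(s) @ vt           # low-rank reconstruction
    return np.clip(out, 0, 255).astype(np.uint8)
```

Lower values of `keep` degrade the image more aggressively; in a de-identification setting the rank would be tuned so that human recognizability survives while the automatic classifier's accuracy collapses.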