This issue of Sleep and Breathing presents the validation results of a new automated wake/sleep staging method based on EOG activity, developed by Jussi Virkkala from the Finnish Institute of Occupational Health. As is classically done, the automated method is compared with visual analysis on an epoch-by-epoch basis. It reaches a global concordance of 88 % with a kappa of 0.57. In other words, of the 248,696 epochs of the validation dataset, 212,138 were scored as wake or sleep in agreement with the human expert, while the two scorings differ on 36,558 epochs. This level is considered good in the literature devoted to evaluating automated methods. It illustrates two points:

1. Automated analysis methods are continuously developing [1].
2. Their performance keeps increasing.

Performance-wise, two trends coexist in the literature: one aims at evaluating inter-expert agreement (the percentage of epochs of a recording, or of a set of recordings, for which two human scorers give exactly the same score), if not intra-expert agreement (the percentage of epochs of a recording, or of a set of recordings, for which a single human scorer gives the same score when scoring the data twice within a given period of time) [2][3][4][5][6][7][8][9]. The other aims at evaluating the performance of automated methods compared with visual analysis [10][11][12]. A recent publication demonstrated that, on a dataset of 70 recordings, an automated method did not differ from a reference scoring any more than visual analysis did [13]. In other words, automated analysis can reach an accuracy comparable to that of visual analysis.

These levels of performance are new. Let us remember what automated analysis looked like only a few years ago. There was a vicious circle: automated analysis was disregarded, thus attracted little attention and effort, and was therefore doomed to remain unsatisfactory, as it obviously takes talent and time to teach a machine to mimic the extremely complex operations that an experienced scorer performs when scoring sleep. The vicious circle seems to be turning virtuous as automated analysis becomes a topic of interest in which high-profile research teams get involved.

Now that the accuracy of this method is established, let us consider how it works. Indeed, whereas visual analysis is standardized, automated methods are very diverse: many alternative approaches to PSG are being explored. As stated in the AASM manual, conventional PSG, which is necessary even for the not-so-simple discrimination between wake and sleep, requires a minimum of seven channels. Here, the proposed montage is respiratory polygraphy plus three sensors (two EOG electrodes and a reference). The EOG-based method validated in this paper belongs to a family of methods that all tend to reduce the number of sensors on the patient: actimetry [14], peripheral arterial tone and pulse transit time [15], motion analysis [16], EOG [17], and EEG only [18][19][20][21][22][23][24]. One question immediately arises: could the discrepancies observed between visual staging and the validated methods be explained by this reduction in the number of signals? Probably not, as in thi...