Developing reliable methodologies to decode brain state information from electroencephalogram (EEG) signals is an open challenge, crucial to implementing EEG-based brain–computer interfaces (BCIs). For example, signal processing methods that identify brain states could allow motor-impaired patients to communicate via non-invasive, EEG-based BCIs. In this work, we focus on the problem of distinguishing between the states of eyes closed (EC) and eyes open (EO), employing quantities based on permutation entropy (PE). An advantage of PE analysis is that it uses symbols (ordinal patterns) defined by the ordering of the data points (disregarding their actual values), hence providing robustness to noise and to outliers caused by motion artifacts. However, we show that for the analysis of multichannel EEG recordings, the performance of PE in discriminating the EO and EC states depends on how the symbols are defined and how their probabilities are estimated. Here, we study the performance of PE-based features for EC/EO state classification in a dataset of N=107 subjects with one-minute 64-channel EEG recordings in each state. We analyze features obtained from patterns encoding temporal or spatial information, and we compare different approaches to estimating their probabilities (by averaging over time, over channels, or by “pooling”). We find that some PE-based features provide about 75% classification accuracy, comparable to the performance of features extracted with other statistical analysis techniques. Our work highlights the limitations of PE methods in distinguishing the EC and EO states, but, at the same time, it points to the possibility that subject-specific training could overcome these limitations.
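To make the notion of ordinal patterns concrete, the following is a minimal sketch of the standard (Bandt–Pompe) permutation entropy of a single-channel time series, with pattern probabilities estimated by averaging over time. The function names and parameters are illustrative; the paper's actual pipeline (multichannel patterns, pooling over channels) is more elaborate.

```python
import math

def ordinal_pattern(window):
    # Encode a window by the ordering of its samples: the tuple of
    # indices that would sort the window, with ties broken by time order.
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal.

    Ordinal patterns of length `order` (sampled every `delay` steps)
    are extracted, their probabilities estimated as relative frequencies
    over time, and the Shannon entropy of that distribution is
    normalized by log(order!) so the result lies in [0, 1].
    """
    n_windows = len(signal) - (order - 1) * delay
    counts = {}
    for t in range(n_windows):
        pattern = ordinal_pattern([signal[t + k * delay] for k in range(order)])
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n_windows for c in counts.values()]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(math.factorial(order))
```

Because only the ordering of samples matters, a monotonic trend yields a single pattern (entropy 0), while irregular data spread probability over more of the `order!` possible patterns, pushing the normalized entropy toward 1; this rank-based encoding is what makes PE robust to amplitude outliers.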