Recently, brain-computer interface (BCI) research has been extended to investigate possible applications in motor rehabilitation. Most of these investigations have focused on the upper body; only a few studies consider gait, because of the difficulty of recording EEG during gross movements. However, for stroke patients, the rehabilitation of gait is of crucial importance. This study therefore investigates whether a BCI can be based on walking-related desynchronization features. Furthermore, the influence of the complexity of the walking movements on classification performance is investigated. Two BCI experiments were conducted in which healthy subjects performed a cued walking task, a more complex walking task (backward or adaptive walking), and imagination of the same tasks. EEG data recorded during these tasks were classified into walking and no-walking. The results of both experiments show that, despite the automaticity of walking and the recording difficulties, brain signals related to walking could be classified rapidly and reliably. Classification performance was higher for actual walking movements than for imagined walking movements. Neither the backward nor the adaptive walking task yielded a significant increase in classification performance compared with cued walking. These results are promising for the development of a BCI for the rehabilitation of gait.
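As an illustration of the classification step described above, the following is a minimal sketch rather than the study's actual pipeline: it assumes EEG epochs have already been cleaned and reduced to band-power features (walking-related desynchronization would appear as reduced sensorimotor band power), substitutes synthetic data for real recordings, and uses a shrinkage-regularized linear discriminant, a common choice for band-power BCI features. All names and parameters are illustrative.

```python
# Minimal sketch (not the study's pipeline): classify epochs into
# walking vs. no-walking from band-power "desynchronization" features.
# Assumes preprocessing (artifact removal, epoching, band-power
# extraction) is already done; synthetic data stand in for real EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_features = 200, 16  # e.g., mu/beta log band-power per channel

# Synthetic features: walking epochs show reduced sensorimotor band power
# (event-related desynchronization); no-walking epochs do not.
no_walk = rng.normal(loc=0.0, scale=1.0, size=(n_epochs, n_features))
walk = rng.normal(loc=-0.8, scale=1.0, size=(n_epochs, n_features))

X = np.vstack([no_walk, walk])
y = np.concatenate([np.zeros(n_epochs), np.ones(n_epochs)])

# Shrinkage LDA handles correlated band-power features robustly.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```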
Facial expressions are behavioural cues that represent an affective state. As such, they are an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance; once the expressions have been identified, their interpretation is usually left to a field expert. However, facial expressions do not always represent felt affect; they can also serve as a communication tool. Facial expression measurements are therefore prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences about the nature of the facial expressions rather than merely describing facial movements. We present two experiments designed to assess whether such automated inferential judgment could be advantageous. In particular, we investigated the differences between posed and spontaneous smiles. The aim of the first experiment was to elicit both types of expression. In contrast to other studies, the temporal dynamics of the elicited posed expressions were not constrained by the eliciting instruction. Electromyography (EMG) was used to discriminate between the two types automatically. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed, independently of the producer’s ethnicity. The EMG-based automatic detection agreed with the elicited expression type with 94% accuracy. Finally, measurements of agreement between human video coders showed that although agreement on perceptual labels is fairly good, it worsens for inferential labels. A second experiment confirmed that laypersons are poor at distinguishing posed from spontaneous smiles. Automatic identification of inferential labels would therefore benefit affective assessment and further research on this topic.
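To make the feature-based discrimination concrete, here is a minimal sketch rather than the paper's method: it assumes smile segments have already been extracted from a rectified, smoothed zygomaticus EMG envelope, computes the four feature types reported above (magnitude, onset time, onset speed, offset speed), and trains a simple classifier on simulated data. The direction and size of the simulated class differences, and all names, are assumptions made only for illustration.

```python
# Minimal sketch (not the paper's method): discriminate posed from
# spontaneous smiles using the kinds of features reported to differ:
# magnitude, onset time, and onset/offset speed. EMG envelopes are
# simulated; a real pipeline would rectify and low-pass filter
# zygomaticus major EMG first.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def smile_features(envelope, fs):
    """Magnitude and timing features from one smile's EMG envelope."""
    peak = envelope.max()
    peak_idx = int(envelope.argmax())
    onset_time = peak_idx / fs                  # seconds to apex
    onset_speed = peak / max(onset_time, 1e-6)  # amplitude rise per second
    offset_dur = (len(envelope) - peak_idx) / fs
    offset_speed = peak / max(offset_dur, 1e-6)
    return [peak, onset_time, onset_speed, offset_speed]

def simulate(n, peak_mu, rise_mu, fs=1000):
    """Simulate n smile envelopes: linear rise to apex, exponential decay."""
    feats = []
    for _ in range(n):
        peak = abs(rng.normal(peak_mu, 0.2))
        rise = max(rng.normal(rise_mu, 0.2), 0.1)  # seconds to apex
        t = np.arange(int(3 * fs)) / fs
        env = peak * np.clip(t / rise, 0, 1) * np.exp(-np.clip(t - rise, 0, None))
        feats.append(smile_features(env, fs))
    return np.array(feats)

# Assumed direction of effects (illustrative only): posed smiles
# simulated as larger and faster-rising than spontaneous ones.
posed = simulate(100, peak_mu=1.5, rise_mu=0.4)
spont = simulate(100, peak_mu=1.0, rise_mu=0.9)
X = np.vstack([posed, spont])
y = np.concatenate([np.ones(100), np.zeros(100)])

clf = LogisticRegression(max_iter=1000)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```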