Human faces convey a wealth of information, such as gender, identity, and emotional state. Understanding how volunteers' eye movements differ on benchmark tests of face recognition and perception can therefore indicate which facial regions are most discriminative for improving performance in this visual cognitive task. The aim of this work is to characterize and classify these gaze strategies using multivariate statistics and machine learning techniques, achieving up to 94.8% accuracy. Our experimental results show that volunteers focused their visual attention, on average, on the eyes, but that those with superior test performance looked more closely at the nose region.
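As a minimal sketch of the kind of classification described above (not the authors' code), the example below fits a linear discriminant classifier to region-of-interest fixation features. The feature layout, placeholder data, and labels are hypothetical; the study's reported 94.8% accuracy comes from real eye-tracking data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row: proportion of total fixation time on [eyes, nose, mouth, other]
# for one volunteer (proportions sum to 1). Placeholder data only.
X = rng.dirichlet([4.0, 2.0, 1.0, 1.0], size=40)

# Hypothetical labels: 1 = superior test performance, 0 = otherwise.
y = rng.integers(0, 2, size=40)

clf = LinearDiscriminantAnalysis()
# On real data this would estimate how well gaze strategy predicts performance;
# on the random placeholder above it hovers around chance.
print(cross_val_score(clf, X, y, cv=5).mean())
```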
The neural activation patterns evoked by music listening can reveal whether a subject has received music training. In the current exploratory study, we approach this two-group (musicians and nonmusicians) classification problem through a computational framework composed of the following steps: acoustic feature extraction, acoustic feature selection, trigger selection, EEG signal processing, and multivariate statistical analysis. We are particularly interested in analyzing the brain data at a global level, considering the activity registered in electroencephalogram (EEG) signals at a given time instant. The results of our experiment—with 26 volunteers (13 musicians and 13 nonmusicians) who listened to Johannes Brahms's Hungarian Dance No. 5—show that it is possible to linearly differentiate musicians and nonmusicians with classification accuracies ranging from 69.2% (test set) to 93.8% (training set), despite the limited sample sizes available. Additionally, given the whole-brain vector navigation method described and implemented here, our results suggest that it is possible to highlight the most expressive and discriminant changes in the participants' brain activity patterns depending on the acoustic feature extracted from the audio.
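The sketch below illustrates the core idea of this framework under stated assumptions: select time instants (triggers) where an acoustic feature peaks, take the whole-brain EEG vector (all channel amplitudes) at each trigger, and fit a linear classifier separating musicians from nonmusicians. The librosa calls are real; the trigger rule, EEG array shape, and sampling rates are illustrative placeholders, not the authors' actual pipeline.

```python
import numpy as np
import librosa
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

sr = 22_050
audio = np.random.randn(sr * 30).astype(np.float32)  # stand-in for the Brahms recording

rms = librosa.feature.rms(y=audio)[0]                # acoustic feature: frame-wise loudness
hop = 512                                            # librosa's default hop length

# Trigger selection (illustrative rule): frames where RMS exceeds its 95th percentile.
trigger_times = np.flatnonzero(rms > np.percentile(rms, 95)) * hop / sr  # in seconds

# Placeholder EEG: 26 subjects x 64 channels x 60 s at 1000 Hz.
eeg_sr = 1000
eeg = np.random.randn(26, 64, 60 * eeg_sr)

# Whole-brain vector: all channel amplitudes at the first trigger instant.
t = int(trigger_times[0] * eeg_sr)
X = eeg[:, :, t]                                     # shape (26, 64)
y = np.array([1] * 13 + [0] * 13)                    # 1 = musician, 0 = nonmusician

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.score(X, y))                               # training-set accuracy
```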
In this work, we extend a standard and successful acoustic feature extraction approach based on trigger selection to examples of Brazilian Bossa Nova and Heitor Villa-Lobos music pieces. Additionally, we propose and implement a computational framework to determine whether all the extracted acoustic features are statistically relevant, that is, non-redundant. Our experimental results show that not all of these well-known features may be necessary for trigger selection: the multivariate statistical redundancy we found grouped the acoustic features into three clusters, each with different factor loadings and, consequently, a different representative feature.
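One plausible way to realize such a redundancy check, sketched below with placeholder data, is factor analysis: features that load most strongly on the same factor form a cluster, and the highest-loading feature in each cluster can serve as its representative. The three-factor choice mirrors the three clusters reported above; the feature names and data are illustrative, not the study's actual measurements.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

features = ["rms", "spectral_centroid", "spectral_rolloff",
            "zero_crossing_rate", "tempo", "chroma_mean"]
X = np.random.randn(200, len(features))      # placeholder feature matrix (frames x features)

Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=3, random_state=0).fit(Z)
loadings = fa.components_.T                  # shape (n_features, 3)

# Assign each feature to the factor it loads on most strongly (in magnitude);
# features sharing a factor are mutually redundant for trigger selection.
cluster = np.abs(loadings).argmax(axis=1)
for k in range(3):
    members = [f for f, c in zip(features, cluster) if c == k]
    print(f"cluster {k}: {members}")
```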