Music is a powerful, pleasurable stimulus that can induce positive feelings and can therefore be used for emotional self-regulation. Musical activities such as listening to music, playing an instrument, singing, or dancing are also an important source of social contact, promoting interaction and a sense of belonging. Recent evidence suggests that after retirement, other functions of music, such as self-conceptual processing related to autobiographical memories, become more salient. However, few studies have addressed the meaningfulness of music in the elderly. This study aims to investigate elderly people's habits and preferences related to music, examine the role music plays in their everyday life, and explore the relationship between musical activities and emotional well-being across different European countries. A survey will be administered to elderly people over the age of 65 from five European countries (Bosnia and Herzegovina, Czechia, Germany, Ireland, and the UK) and to a control group. Participants in both groups will be asked about basic sociodemographic information, their habits and preferences regarding participation in musical activities, and their emotional well-being. Overall, the aim of this study is to gain a deeper understanding of the role of music in the elderly from a psychological perspective. This knowledge could help to develop therapeutic applications, such as musical recreational programs for healthy older people or elderly people in residential care, that better meet their emotional and social needs.
The temporal and spatial neural processing of faces has been investigated rigorously, but few studies have unified these dimensions to reveal the spatio-temporal dynamics postulated by models of face processing. We used support vector machine decoding and representational similarity analysis to combine information from different locations (fMRI), time windows (EEG), and theoretical models. By correlating information matrices derived from pairwise classifications of neural responses to different facial expressions (neutral, happy, fearful, angry), we found early EEG time windows (starting around 130 ms) to match fMRI data from early visual cortex (EVC), and later time windows (starting around 190 ms) to match data from the occipital and fusiform face areas (OFA/FFA) and the posterior superior temporal sulcus (pSTS). According to model comparisons, the EEG classifications were based more on low-level visual features than on expression intensities or categories. In fMRI, the model comparisons revealed a change along the processing hierarchy, from low-level visual feature coding in EVC to coding of expression intensity in the right pSTS. The results highlight the importance of a multimodal approach for understanding the functional roles of different brain regions in face processing.
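The fusion approach described above (correlating EEG time-window information matrices with fMRI region matrices) can be illustrated with a minimal sketch. This is not the authors' code: the data below are simulated random numbers, and the number of time windows and the region names are stand-ins. Real inputs would be matrices of pairwise decoding accuracies for the four expressions.

```python
# Hypothetical sketch of EEG-fMRI fusion via representational similarity
# analysis (RSA). Each "RDM" is a vector of 6 pairwise dissimilarities,
# one per pair of the 4 expression conditions: C(4, 2) = 6.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_pairs = 6        # pairwise expression classifications
n_windows = 50     # EEG time windows (e.g., spanning 0-500 ms)

# Simulated RDMs: one per EEG time window, one per fMRI region of interest
eeg_rdms = rng.random((n_windows, n_pairs))
fmri_rdms = {"EVC": rng.random(n_pairs),
             "OFA/FFA": rng.random(n_pairs),
             "pSTS": rng.random(n_pairs)}

# Fusion: rank-correlate each EEG window's RDM with each ROI's RDM; the
# resulting time course shows when EEG representations match each region.
fusion = {roi: np.array([spearmanr(eeg_rdms[t], rdm).correlation
                         for t in range(n_windows)])
          for roi, rdm in fmri_rdms.items()}

for roi, series in fusion.items():
    print(f"{roi}: peak rho = {series.max():.2f} at window {series.argmax()}")
```

With real data, the window at which the correlation peaks for each region gives the latency estimates reported in the abstract (EVC before OFA/FFA and pSTS).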
Many studies of visual working memory have tested humans' ability to reproduce primary visual features of simple objects, such as the orientation of a grating or the hue of a color patch, following a delay. A consistent finding of such studies is that the precision of responses declines as the number of items in memory increases. Here we compared visual working memory for primary features and high-level objects. We presented participants with memory arrays consisting of oriented gratings, facial expressions, or a mixture of both. Precision of reproduction for facial expressions declined steadily as the memory load was increased from one to five faces. For primary features, this decline and the specific distributions of error observed have been parsimoniously explained in terms of neural population codes. We adapted the population coding model for circular variables to the non-circular, bounded parameter space used for expression estimation. Total population activity was held constant according to the principle of normalization, and the intensity of expression was decoded by drawing samples from the Bayesian posterior distribution. The model fit the data well, showing that principles of population coding can be applied to model memory representations at multiple levels of the visual hierarchy. When both gratings and faces had to be remembered, an asymmetry was observed: increasing the number of faces decreased the precision of orientation recall, but increasing the number of gratings did not affect recall of expression, suggesting that memorizing faces involves the automatic encoding of low-level features in addition to higher-level expression information.
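The modeling steps named above (a population code over a bounded, non-circular intensity axis, total activity held constant under normalization, decoding by sampling from the Bayesian posterior) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; all parameter values (tuning width, gain, unit count) are assumptions.

```python
# Minimal population-coding sketch: Gaussian tuning over a bounded
# "expression intensity" axis in [0, 1], divisive normalization across
# memory load, Poisson spiking, and posterior-sampling decoding.
import numpy as np

rng = np.random.default_rng(1)
axis = np.linspace(0.0, 1.0, 101)   # bounded, non-circular intensity space
pref = np.linspace(0.0, 1.0, 20)    # preferred intensities of 20 units
width, gain = 0.15, 50.0            # tuning width and total spike budget

def expected_rates(theta, set_size):
    """Gaussian tuning; the fixed total activity is divided among the
    memorized items (normalization), so each item's code degrades with load."""
    rates = np.exp(-0.5 * ((pref - theta) / width) ** 2)
    return rates * (gain / set_size) / rates.sum()

def decode(spikes, set_size, n_samples=100):
    """Poisson log-likelihood over the bounded axis with a flat prior;
    the estimate is the mean of samples drawn from the posterior."""
    log_post = []
    for theta in axis:
        r = expected_rates(theta, set_size)
        log_post.append(np.sum(spikes * np.log(r + 1e-12) - r))
    log_post = np.array(log_post)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    return rng.choice(axis, size=n_samples, p=post).mean()

true_intensity = 0.6
errors = {}
for load in (1, 3, 5):
    trials = [abs(decode(rng.poisson(expected_rates(true_intensity, load)),
                         load) - true_intensity) for _ in range(200)]
    errors[load] = float(np.mean(trials))
    print(f"set size {load}: mean absolute error {errors[load]:.3f}")
```

Because the spike budget is shared across items, fewer spikes encode each item at higher loads, the posterior widens, and reproduction error grows with set size, which is the qualitative pattern the abstract reports.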