Visual neurons coordinate their responses in relation to the stimulus; however, the complex interplay between a stimulus and the functional dynamics of an assembly still eludes neuroscientists. To this end, we recorded cell assemblies with multi-electrodes in the primary visual cortex of anaesthetized cats in response to randomly presented sine-wave drifting gratings whose orientation was tilted in 22.5° steps. Cross-correlograms revealed the functional connections at all tested orientations. We show that a cell assembly discriminates between orientations by recruiting a 'salient' functional network at every presented orientation, wherein the connections and their strengths (peak probabilities in the cross-correlogram) change from one orientation to another. Within these assemblies, closely tuned neurons exhibited greater connectivity and stronger connections than differently tuned neurons. Minimal connectivity between untuned neurons underscores the significance of neuronal selectivity in assemblies. This study reflects upon the dynamics of functional connectivity and brings to the fore the importance of a 'signature' functional network in an assembly that is strictly related to a specific stimulus. It thus suggests that the assembly, rather than the individual neuron, is the principal 'functional unit' of information processing in cortical circuits.
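To make the core analysis concrete, the sketch below computes a cross-correlogram between two spike trains and extracts a peak probability as a connection-strength estimate. This is a minimal illustration assuming spike times in seconds; the function name, window, bin size, and normalization are hypothetical and are not the study's actual pipeline.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, window=0.05, bin_size=0.001):
    """Histogram of spike-time lags (b - a) within +/- window seconds.

    spikes_a, spikes_b: 1-D arrays of spike times in seconds.
    Returns the bin counts, the bin edges, and a simple 'peak probability':
    the peak bin count normalized by the number of reference spikes.
    """
    lags = spikes_b[None, :] - spikes_a[:, None]   # all pairwise lags
    lags = lags[np.abs(lags) <= window]            # keep lags inside the window
    edges = np.arange(-window, window + bin_size, bin_size)
    counts, _ = np.histogram(lags, bins=edges)
    peak_prob = counts.max() / max(len(spikes_a), 1)
    return counts, edges, peak_prob

# Usage: a train that reliably follows another at a 2 ms lag
spikes_a = np.array([0.0, 0.1, 0.2])
counts, edges, peak_prob = cross_correlogram(spikes_a, spikes_a + 0.002)
```

A narrow peak near zero lag in such a histogram is the usual signature of a putative functional connection between the two neurons.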
The use of the electroencephalogram (EEG) as the main input signal in brain-machine interfaces has been widely proposed because EEG is non-invasive. Here we are interested in interfaces that extract information from the auditory system, specifically in the task of classifying heard speech from EEG recordings. To do so, we propose to limit the preprocessing of the EEGs and to use machine learning approaches to extract their meaningful characteristics automatically. In particular, we use a regulated recurrent neural network (RNN) reservoir, which has been shown to outperform classic machine learning approaches on several different bio-signals, and we compare it with a deep neural network approach. We also investigate classification performance as a function of the number of EEG electrodes. Eight subjects were presented, in random order, with 3 auditory stimuli (the English vowels a, i and u). We obtained an excellent classification rate of 83.2% with the RNN when considering all 64 electrodes; a rate of 81.7% was achieved with only 10 electrodes.
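The reservoir approach described above can be sketched as follows: a fixed random recurrent network is driven by the multichannel signal, and only a simple readout on the reservoir states is trained. This is a generic echo-state-style sketch under assumed sizes and scaling; the network dimensions, spectral radius, and update rule are illustrative assumptions, not the paper's regulated reservoir.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Fixed random input and recurrent weights (hypothetical sizes).

    The recurrent matrix is rescaled so its largest eigenvalue magnitude
    equals spectral_radius, a common stability heuristic.
    """
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, signal):
    """Drive the reservoir with a (T x n_in) signal; return the final state.

    Only this state vector would be fed to a trained linear readout
    (e.g. ridge regression) for classification.
    """
    x = np.zeros(W.shape[0])
    for u in signal:
        x = np.tanh(W_in @ u + W @ x)
    return x

# Usage: a 20-step, 3-channel toy signal driving a 50-unit reservoir
W_in, W = make_reservoir(n_in=3, n_res=50)
state = run_reservoir(W_in, W, rng.standard_normal((20, 3)))
```

Because the recurrent weights stay fixed, training reduces to fitting a linear readout, which is what makes reservoirs attractive for noisy bio-signals such as EEG.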
Grounding is the problem of correspondence between the symbolic concepts of language and the physical environment. We propose to tackle language acquisition and grounding through multimodal event-based representations and probabilistic generative modeling. First, we establish a new multimodal dataset recorded from a mobile robot and describe how such multimodal signals can be efficiently encoded into compact, event-based representations using sparse coding. We highlight why these representations may be better suited to grounding concepts. We then describe a generative probabilistic model built on those event-based representations. We discuss possible applications of this probabilistic framework in the context of a cognitive agent, such as detecting novelty in the inputs or reasoning by building internal simulations of the environment. While this work is still in progress, it could open new perspectives on how representational learning can play a key role in the ability to map structures of the multimodal scene to language.
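Sparse coding, as used above to compress multimodal signals into event-based representations, can be illustrated with the standard ISTA algorithm: it finds a sparse coefficient vector that reconstructs a signal from a dictionary of atoms. This is a textbook sketch with hypothetical parameters; the abstract does not specify the authors' encoder.

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=100):
    """Sparse-code a signal x against dictionary D (columns = atoms) via ISTA.

    Minimizes 0.5 * ||x - D a||^2 + lam * ||a||_1 by alternating a gradient
    step with soft thresholding. lam and n_iter are illustrative choices.
    """
    L = np.linalg.norm(D, 2) ** 2                  # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                   # gradient of the reconstruction term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Usage: with an identity dictionary, the code is the soft-thresholded signal
D = np.eye(4)
code = ista(D, np.array([1.0, 0.0, 0.0, 0.0]))
```

Most coefficients end up exactly zero, so each signal is summarized by a few active atoms, which is the compactness the event-based representation relies on.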