In neuroscience, stimulus-response relationships have traditionally been analyzed using either encoding or decoding models. Here we propose a hybrid approach that decomposes neural activity into multiple components, each representing a portion of the stimulus. The technique is implemented via canonical correlation analysis (CCA) by temporally filtering the stimulus (encoding) and spatially filtering the neural responses (decoding) such that the resulting components are maximally correlated. In contrast to existing methods, this approach recovers multiple correlated stimulus-response pairs, and thus affords a richer, multidimensional analysis of neural representations. We first validated the technique's ability to recover multiple stimulus-driven components using electroencephalographic (EEG) data simulated with a finite element model of the head. We then applied the technique to real EEG responses to auditory and audiovisual narratives experienced identically across subjects, as well as to uniquely experienced video game play. During narratives, both auditory and visual stimulus-response correlations (SRC) were modulated by attention and tracked inter-subject correlations. During video game play, SRC varied with game difficulty and the presence of a dual task. Interestingly, the strongest components extracted for visual and auditory features of film clips had nearly identical spatial distributions, suggesting that the predominant encephalographic response to naturalistic stimuli is supramodal. The diversity of these findings demonstrates the utility of measuring multidimensional SRC via hybrid encoding-decoding.
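The core of the hybrid encoding-decoding idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the stimulus feature, EEG array, delay, number of lags, and number of components are invented placeholders, with the temporal filtering realized as a bank of time-lagged stimulus copies and the spatial filtering as the channel weights learned by CCA.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, n_channels, n_lags, n_comp = 5000, 32, 20, 3     # illustrative sizes

stimulus = rng.standard_normal(T)                   # e.g., an auditory envelope (assumed feature)
forward_model = rng.standard_normal(n_channels)     # how the driven response projects to the scalp
eeg = 0.5 * np.outer(np.roll(stimulus, 5), forward_model)   # stimulus-driven part (5-sample delay)
eeg += rng.standard_normal((T, n_channels))         # plus unrelated "brain noise"

# Encoding side: time-lagged copies of the stimulus serve as a temporal filter basis
# (edge wrap-around from np.roll is ignored for simplicity).
S = np.column_stack([np.roll(stimulus, lag) for lag in range(n_lags)])

# Decoding side: CCA learns spatial weights over EEG channels; jointly, the two
# projections yield component pairs with maximal stimulus-response correlation (SRC).
cca = CCA(n_components=n_comp)
U, V = cca.fit_transform(S, eeg)
src = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(n_comp)]
print("SRC per component:", np.round(src, 3))
```

Each column pair of U and V corresponds to one stimulus-response component, and its correlation is the per-component SRC referred to above.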
Human brain mapping relies heavily on fMRI, ECoG and EEG, which capture different physiological signals. Relationships between these signals have been established in the context of specific tasks or during resting state, often using spatially confined concurrent recordings in animals. However, it is not clear whether these correlations generalize to other contexts relevant to human cognitive neuroscience. Here, we address the case of complex naturalistic stimuli and ask two basic questions. First, how reliable are the responses evoked by a naturalistic audio-visual stimulus in each of these imaging methods, and second, how similar are stimulus-related responses across methods? To this end, we investigated a wide range of brain regions and frequency bands. We presented the same movie clip twice to three different cohorts of subjects (N = 45, N = 11, N = 5) and assessed stimulus-driven correlations across viewings and between imaging methods, thereby ruling out task-irrelevant confounds. All three imaging methods had similar repeat-reliability across viewings when fMRI and EEG data were averaged across subjects, highlighting the potential to achieve a large signal-to-noise ratio by leveraging large sample sizes. The fMRI signal correlated positively with high-frequency ECoG power across multiple task-related cortical structures but negatively with low-frequency EEG and ECoG power. In contrast to previous studies, these correlations were as strong for low-frequency as for high-frequency ECoG. We also observed links between fMRI and infra-slow EEG voltage fluctuations. These results extend previous findings to the case of natural stimulus processing.
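As a rough illustration of the repeat-reliability measure, the sketch below correlates subject-averaged responses to two viewings of the same clip, channel by channel. The data, array shapes, and variable names are assumptions for illustration only and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_channels, T = 10, 64, 3000            # illustrative sizes

stimulus_driven = rng.standard_normal((n_channels, T))   # shared, repeatable response
viewing1 = stimulus_driven + rng.standard_normal((n_subjects, n_channels, T))
viewing2 = stimulus_driven + rng.standard_normal((n_subjects, n_channels, T))

# Average across subjects first (as in the abstract) to raise the signal-to-noise ratio.
avg1, avg2 = viewing1.mean(axis=0), viewing2.mean(axis=0)

# Repeat-reliability: correlation between the two viewings, computed per channel.
repeat_reliability = np.array(
    [np.corrcoef(avg1[c], avg2[c])[0, 1] for c in range(n_channels)]
)
print("median repeat-reliability:", round(float(np.median(repeat_reliability)), 3))
```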
Videos and commercials produced for large audiences can elicit mixed opinions. We wondered whether this diversity is also reflected in the way individuals watch the videos. To answer this question, we presented 65 commercials with high production value to 25 individuals while recording their eye movements, and asked them to provide preference ratings for each video. We find that gaze positions for the most popular videos are highly correlated. To explain the correlations of eye movements, we model them as “interactions” between individuals. A thermodynamic analysis of these interactions shows that they approach a “critical” point such that any stronger interaction would put all viewers into lock-step and any weaker interaction would fully randomise patterns. At this critical point, groups with similar collective behaviour in viewing patterns emerge while maintaining diversity between groups. Our results suggest that popularity of videos is already evident in the way we look at them, and that we maintain diversity in viewing behaviour even as distinct patterns of groups emerge. Our results can be used to predict popularity of videos and commercials at the population level from the collective behaviour of the eye movements of a few viewers.
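The inter-viewer agreement of gaze positions referred to above can be approximated as follows. The gaze array and its dimensions are hypothetical, and this sketch only shows pairwise correlation of horizontal and vertical gaze traces; it does not reproduce the thermodynamic interaction model.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_viewers, T = 25, 1500
shared = rng.standard_normal((T, 2))                          # a common, stimulus-driven scanpath
gaze = shared + 0.7 * rng.standard_normal((n_viewers, T, 2))  # viewers x time x (x, y) position

pair_corrs = []
for i, j in combinations(range(n_viewers), 2):
    # Correlate each gaze coordinate separately, then average the two values.
    r = np.mean([np.corrcoef(gaze[i, :, d], gaze[j, :, d])[0, 1] for d in range(2)])
    pair_corrs.append(r)

print("mean inter-viewer gaze correlation:", round(float(np.mean(pair_corrs)), 3))
```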
Detection of mine-like objects (MLOs) in sidescan sonar imagery is a problem that affects our military in terms of safety and cost. The current process requires subject matter experts to spend large amounts of time analyzing sonar images in search of MLOs. Automation of the detection process has been heavily researched over the years, and some of these computer vision approaches have improved dramatically, providing substantial processing-speed benefits. However, the human visual system has an unmatched ability to recognize objects of interest. This paper posits a brain-computer interface (BCI) approach that combines the complementary benefits of computer vision and human vision. The first stage of the BCI, a Haar-like feature classifier, is cascaded into the second stage, rapid serial visual presentation (RSVP) of image chips. The RSVP paradigm maximizes throughput while allowing an electroencephalography (EEG) interest classifier to determine the human subjects' recognition of objects. In an additional proposed BCI system, we add a third stage that uses a trained support vector machine (SVM) based on the Haar-like features of stage one and the EEG interest scores of stage two. We characterize and show performance improvements for subsets of these BCI systems over the computer vision and human vision capabilities alone.
Index Terms: Boosting, brain-computer interface (BCI), mine-like object (MLO), object detection, rapid serial visual presentation (RSVP), sidescan sonar.
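The proposed third stage amounts to a late-fusion classifier over the outputs of stages one and two. Below is a hedged sketch assuming a single Haar detector confidence and a single EEG interest score per image chip; the feature names, synthetic labels, and train/test split are illustrative and not taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_chips = 500
haar_score = rng.random(n_chips)        # stage 1: computer-vision detector confidence
eeg_interest = rng.random(n_chips)      # stage 2: RSVP/EEG interest-classifier score
# Synthetic labels: chips that both stages score highly are more likely to be MLOs.
is_mlo = (0.6 * haar_score + 0.4 * eeg_interest + 0.1 * rng.standard_normal(n_chips)) > 0.6

X = np.column_stack([haar_score, eeg_interest])
fusion_svm = SVC(kernel="rbf").fit(X[:400], is_mlo[:400])   # stage 3: late fusion
print("held-out fusion accuracy:", round(fusion_svm.score(X[400:], is_mlo[400:]), 3))
```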