Single-trial analyses have the potential to uncover meaningful brain dynamics that are obscured when averaging across trials. However, a low signal-to-noise ratio (SNR) can impede the use of single-trial analyses and decoding methods. In this study, we investigate the applicability of a single-trial approach for decoding stimulus modality from magnetoencephalography (MEG) high frequency activity. To classify the auditory versus visual presentation of words, we combine beamformer source reconstruction with the random forest classification method. To enable group-level inference, the classification is embedded in an across-subjects framework. We show that single-trial gamma SNR allows for good classification performance (accuracy across subjects: 66.44 %). This implies that the characteristics of high frequency activity are highly consistent across trials and subjects. The random forest classifier assigned informational value to activity in both auditory and visual cortex with high spatial specificity. Across time, gamma power was most informative during stimulus presentation. Among all frequency bands, the 75-95 Hz band was the most informative in visual as well as in auditory areas. Especially in visual areas, a broad range of gamma frequencies (55-125 Hz) contributed to successful classification. Thus, we demonstrate the feasibility of single-trial approaches for decoding stimulus modality across subjects from high frequency activity, and we describe the discriminative gamma activity in time, frequency, and space.
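To make the decoding scheme concrete, below is a minimal sketch of an across-subjects classification pipeline of this kind, assuming a leave-one-subject-out split (the study reports an across-subjects framework; the exact cross-validation scheme, all variable names, and the synthetic data here are illustrative assumptions, not taken from the paper). It applies scikit-learn's RandomForestClassifier to per-trial, source-level gamma power features.

```python
# Sketch of across-subjects decoding of stimulus modality from
# source-level gamma power. Data here are synthetic placeholders;
# in practice, features would be beamformer-reconstructed gamma power
# per source location, frequency band, and time bin.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical data: one (n_trials, n_features) matrix per subject.
n_subjects, n_trials, n_features = 10, 100, 500
X_per_subject = [rng.normal(size=(n_trials, n_features)) for _ in range(n_subjects)]
y_per_subject = [rng.integers(0, 2, size=n_trials) for _ in range(n_subjects)]  # 0 = auditory, 1 = visual

# Leave-one-subject-out cross-validation: train on all but one subject,
# test on the held-out subject, so that decoding must generalize across subjects.
accuracies = []
for test_subj in range(n_subjects):
    X_train = np.vstack([X for i, X in enumerate(X_per_subject) if i != test_subj])
    y_train = np.concatenate([y for i, y in enumerate(y_per_subject) if i != test_subj])
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_per_subject[test_subj])
    accuracies.append(accuracy_score(y_per_subject[test_subj], y_pred))

print(f"mean across-subjects accuracy: {np.mean(accuracies):.3f}")
```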
Author Summary

Averaging brain activity across trials is a powerful way to increase the signal-to-noise ratio in MEG data. This approach, however, potentially obscures meaningful brain dynamics that unfold at the single-trial level. Single-trial analyses have been successfully applied to time domain or low frequency oscillatory activity; their application to MEG high frequency activity has been hindered by the low amplitude of these signals. In the present study, we show that stimulus modality (visual versus auditory presentation of words) can successfully be decoded from single-trial MEG high frequency activity by combining source reconstruction with a random forest classification algorithm. This approach reveals patterns of activity above 75 Hz in both visual and auditory cortex, highlighting the importance of high frequency activity for the processing of domain-specific stimuli. Our results thereby extend prior findings by revealing high frequency activity in auditory cortex related to auditory word stimuli in MEG data. The adopted across-subjects framework furthermore suggests a high inter-individual consistency in the high frequency activity patterns.
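One way such an approach can describe the discriminative activity in space, frequency, and time is by mapping the random forest's feature importances back onto the feature grid. The sketch below assumes the fitted classifier from the previous example; the grid shape and the band labels are illustrative assumptions, not the study's actual parcellation.

```python
# Hypothetical follow-up: reshape feature importances onto an assumed
# (sources x frequency bands x time bins) grid and rank frequency bands
# by their average importance. Uses `clf` from the sketch above.
import numpy as np

n_sources, n_freqs, n_times = 50, 5, 2   # assumed grid: 50 * 5 * 2 = 500 features
freq_bands = ["55-75 Hz", "75-95 Hz", "85-105 Hz", "95-115 Hz", "105-125 Hz"]  # illustrative labels

importances = clf.feature_importances_.reshape(n_sources, n_freqs, n_times)

# Average over sources and time to rank frequency bands by informativeness.
band_importance = importances.mean(axis=(0, 2))
for band, imp in zip(freq_bands, band_importance):
    print(f"{band}: {imp:.4f}")
```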