Might EEG recorded while one imagines words or sentences provide enough information to identify what is being thought? Analysis of EEG data from an experiment in which two syllables were spoken in imagination in one of three rhythms shows that relevant information is present in the EEG alpha, beta and theta bands. Envelopes are used to compute filters matched to each experimental condition; applying these filters to the data from a single trial identifies the experimental condition used for that trial with appreciably better-than-chance accuracy. Informative spectral features within these bands motivate our current work with EEG spectrograms.
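A minimal sketch of the envelope matched-filter idea in Python follows. The filter design (fourth-order Butterworth band-pass, Hilbert envelopes, correlation scoring) and all function names are our assumptions for illustration, not details taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope(x, lo, hi, fs):
    """Band-pass x to [lo, hi] Hz and return its Hilbert amplitude envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def matched_filters(trials_by_condition, lo, hi, fs):
    """One matched filter per condition: the mean envelope over that condition's training trials."""
    return {cond: np.mean([band_envelope(t, lo, hi, fs) for t in trials], axis=0)
            for cond, trials in trials_by_condition.items()}

def classify(trial, filters, lo, hi, fs):
    """Assign a trial to the condition whose matched filter best correlates with its envelope."""
    env = band_envelope(trial, lo, hi, fs)
    return max(filters,
               key=lambda c: np.dot(env - env.mean(), filters[c] - filters[c].mean()))
```

Classification accuracy would then be estimated by holding out trials and counting how often `classify` recovers the true condition.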
We conducted an experiment to determine whether the rhythm with which imagined syllables are produced can be decoded from EEG recordings. High-density EEG data were recorded from seven subjects while they produced in imagination one of two syllables in one of three different rhythms. We used a modified second-order blind identification (SOBI) algorithm to remove artefact signals and reduce the dimensionality of the data. The algorithm exploits the temporal structure that is consistent across multi-trial EEG data to blindly decompose the original recordings. For the four primary SOBI components, joint temporal and spectral features were extracted from the Hilbert spectra (HS) obtained by the Hilbert-Huang transform (HHT). The HS provide more accurate time-frequency representations of non-stationary data than conventional techniques such as short-time Fourier spectrograms and wavelet scalograms. Classification of the three rhythms yields promising results for inter-trial transfer, with performance for all subjects significantly above chance. For comparison, we tested the classification performance of three averaging-based methods using features in the temporal, spectral and time-frequency domains, respectively; their results are inferior to those of the SOBI-HHT-based method. These results suggest that the rhythmic structure of imagined syllable production can be detected in non-invasive brain recordings, and they provide a step towards an EEG-based system for communicating imagined speech.
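The Hilbert-spectrum step at the core of the HHT can be sketched as below, assuming the open-source PyEMD package (installable as EMD-signal) for empirical mode decomposition; the frequency binning and all parameters are illustrative choices, not the authors':

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # pip install EMD-signal

def hilbert_spectrum(x, fs, freq_bins):
    """Decompose x into intrinsic mode functions (EMD), then accumulate an
    amplitude-weighted time-frequency Hilbert spectrum:
    rows = frequency bins, columns = time samples."""
    imfs = EMD()(x)
    hs = np.zeros((len(freq_bins) - 1, len(x)))
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)
        phase = np.unwrap(np.angle(analytic))
        inst_f = np.gradient(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz
        idx = np.digitize(inst_f, freq_bins) - 1
        valid = (idx >= 0) & (idx < hs.shape[0])
        hs[idx[valid], np.where(valid)[0]] += amp[valid]
    return hs
```

Joint temporal-spectral features (e.g., band-limited energy over time windows of the spectrum) would then feed a classifier for the three rhythms.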
Speech perception requires the successful interpretation of both phonetic and syllabic information in the auditory signal. Poeppel (2003) suggested that phonetic processing requires an optimal time scale of about 25 ms, while syllabic processing operates on a much slower time scale (150–250 ms). To better understand how brain networks operate at these characteristic time scales during speech perception, we studied the spatial and dynamic properties of EEG responses to five stimuli: (1) amplitude-modulated (AM) speech, (2) AM speech with added broadband noise, (3) AM reversed speech, (4) AM broadband noise, and (5) an AM pure tone. Amplitude modulation at gamma-band frequency (40 Hz) elicited steady-state auditory evoked responses (SSAERs) bilaterally over the primary auditory cortices. Reduced SSAERs were observed over the left auditory cortex only for stimuli containing speech. In addition, we found over the left hemisphere, anterior to primary auditory cortex, a network whose instantaneous frequencies in the theta to alpha band (4–16 Hz) are correlated with the amplitude envelope of the speech signal. This correlation was not observed for reversed speech. The presence of speech in the sound input thus activates a 4–16 Hz envelope-tracking network and suppresses the 40 Hz gamma-band network that generates the steady-state responses over the left auditory cortex. These findings are consistent with the idea that speech signals are processed preferentially at syllabic rather than phonetic time scales.
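The envelope-tracking correlation can be illustrated roughly as follows. This is a hedged sketch, not the authors' analysis pipeline: the band-pass design and the use of a single Pearson correlation are our simplifying assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def envelope_tracking_corr(eeg, speech, fs, band=(4, 16)):
    """Correlate the instantaneous frequency of band-limited EEG (4-16 Hz)
    with the amplitude envelope of the speech signal.
    Assumes eeg and speech are equal-length arrays sampled at fs Hz."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, eeg))
    inst_freq = np.gradient(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    speech_env = np.abs(hilbert(speech))
    return np.corrcoef(inst_freq, speech_env)[0, 1]
```

Under the paper's account, this correlation should be appreciable for channels anterior to left auditory cortex during speech, and absent for reversed speech.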
Purpose: To uncover evaluation information on the academic contribution of cited research papers from the content of the sentences that cite them, and to provide an evidence-based tool for evaluating the academic value of cited papers.
Design/methodology/approach: CiteOpinion uses a deep learning model to automatically extract citing sentences from representative citing papers. Starting from these citing sentences, it identifies the major academic contributions of the cited paper, positive and negative evaluations from citing authors, and shifts in the topics addressed by subsequent citing authors, by means of move-category recognition (problems, methods, conclusions, etc.), sentiment analysis, and topic clustering.
Findings: Citing sentences contain substantial evidence useful for academic evaluation. They can reveal, objectively and authentically, the nature and degree of the cited paper's contribution as reflected in its citations, beyond simple citation statistics.
Practical implications: The evidence-based evaluation tool CiteOpinion can provide an objective and in-depth basis for evaluating the academic value of representative papers by individual researchers, research teams, and institutions.
Originality/value: No similar practical tool was found among the papers retrieved.
Research limitations: Full texts of citing papers are difficult to acquire, and the calculation based on the sentiment scores of citing sentences needs refinement. At present, the tool is used only for evaluating academic contributions; its value in policy studies, technical application, and the promotion of science has not yet been tested.
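A toy approximation of two CiteOpinion stages is sketched below, substituting a regex-based extractor for the paper's deep learning model and TF-IDF plus k-means for its topic clustering; every name and pattern here is hypothetical:

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Crude citation-marker pattern: numeric brackets or "(Author et al., 2020)" style.
CITE_MARKER = re.compile(r"\[\d+\]|\(\w+ et al\.,? \d{4}\)")

def citing_sentences(text, cited_key):
    """Toy stand-in for the deep-learning extractor: keep sentences that
    mention the cited paper's key and contain a citation marker."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)
            if cited_key in s and CITE_MARKER.search(s)]

def topic_clusters(sentences, k=3):
    """Group citing sentences into k topics via TF-IDF + k-means,
    approximating the topic-clustering step."""
    X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```

The actual system would replace both functions with trained models (sentence extraction, move-category recognition, sentiment scoring); this sketch only shows where each step sits in the pipeline.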