Several stories in the popular media have speculated that it may be possible to infer from the brain which word a person is speaking or even thinking. While recent studies have demonstrated that brain signals can give detailed information about actual and imagined actions, such as different types of limb movements or spoken words, concrete experimental evidence for the possibility of “reading the mind,” i.e., interpreting internally generated speech, has been scarce. In this study, we found that signals recorded from the surface of the brain (electrocorticography, ECoG) can be used to discriminate the vowels and consonants embedded in spoken and imagined words, and we identified the cortical areas that carried the most information for discriminating them. The results shed light on the distinct mechanisms associated with the production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.
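The summary above does not describe the decoding pipeline, so the following is only a hypothetical illustration of what "discriminating vowel or consonant classes from ECoG" can look like computationally: per-electrode high-gamma power features are fed to a cross-validated linear classifier, and above-chance accuracy is taken as evidence that the signals carry discriminable information. The simulated data, feature choice, and classifier here are assumptions, not the authors' methods or results.

```python
# Hypothetical sketch of vowel-class decoding from ECoG-like features
# (simulated data; not the study's actual pipeline or results).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

n_trials, n_electrodes, n_classes = 120, 32, 3   # e.g., three vowel classes
labels = rng.integers(0, n_classes, n_trials)

# Simulated high-gamma power per electrode: a few electrodes carry
# class-dependent signal, the rest are noise (pure assumption).
signal = np.zeros((n_trials, n_electrodes))
signal[:, :5] = labels[:, None] * 0.8            # "informative" electrodes
features = signal + rng.normal(0.0, 1.0, (n_trials, n_electrodes))

# Cross-validated accuracy well above chance (1/3) would indicate that the
# features discriminate the classes.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, features, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1/n_classes:.2f})")
```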
Some neurons in auditory cortex respond to recent stimulus history by adapting their response functions to track stimulus statistics directly, as might be expected. In contrast, some neurons respond to loud sounds by adjusting their response functions away from high intensities and consequently remain sensitive to softer sounds. In marmoset monkey auditory cortex, the latter type of adaptation appears to exist only in neurons tuned to stimulus intensity.
The mammalian cerebral cortex consists of multiple areas specialized for processing information for many different sensory modalities. Although the basic structure is similar for each cortical area, specialized neural connections likely mediate unique information processing requirements. Relative to primary visual (V1) and somatosensory (S1) cortices, little is known about the intrinsic connectivity of primary auditory cortex (A1). To better understand the flow of information from the thalamus to and through rat A1, we made use of a rapid, high-throughput screening method exploiting laser-induced uncaging of glutamate to construct excitatory input maps of individual neurons. We found that excitatory inputs to layer 2/3 pyramidal neurons were similar to those in V1 and S1; these cells received strong excitation primarily from layers 2-4. Both anatomical and physiological observations, however, indicate that inputs and outputs of layer 4 excitatory neurons in A1 contrast with those in V1 and S1. Layer 2/3 pyramids in A1 have substantial axonal arbors in layer 4, and photostimulation demonstrates that these pyramids can connect to layer 4 excitatory neurons. Furthermore, most or all of these layer 4 excitatory neurons project out of the local cortical circuit. Unlike S1 and V1, where feedback to layer 4 is mediated exclusively by indirect local circuits involving layer 2/3 projections to deep layers and deep feedback to layer 4, layer 4 of A1 integrates thalamic and strong layer 4 recurrent excitatory input with relatively direct feedback from layer 2/3 and provides direct cortical output.
Objectives: Pure-tone audiometry has been a staple of hearing assessment for decades. Many different procedures have been proposed for measuring thresholds with pure tones by systematically manipulating intensity one frequency at a time until a discrete threshold function is determined. The authors have developed a novel nonparametric approach for estimating a continuous threshold audiogram using Bayesian estimation and machine learning classification. The objective of this study was to assess the accuracy and reliability of this new method relative to a commonly used threshold measurement technique.
Design: The authors performed air-conduction pure-tone audiometry on 21 participants between the ages of 18 and 90 years with varying degrees of hearing ability. Two repetitions of automated machine learning audiogram estimation and one repetition of conventional modified Hughson-Westlake ascending-descending audiogram estimation were acquired by an audiologist. The estimated hearing thresholds of the two techniques were compared at standard audiogram frequencies (0.25, 0.5, 1, 2, 4, and 8 kHz).
Results: The two threshold estimation methods delivered very similar estimates at standard audiogram frequencies. Specifically, the mean absolute difference between estimates was 4.16 ± 3.76 dB HL, and the mean absolute difference between repeated measurements of the new machine learning procedure was 4.51 ± 4.45 dB HL. These values compare favorably with those of other threshold audiogram estimation procedures. Furthermore, the machine learning method generated threshold estimates from significantly fewer samples than the modified Hughson-Westlake procedure while returning a continuous threshold estimate as a function of frequency.
Conclusions: The new machine learning audiogram estimation technique produces continuous threshold audiogram estimates accurately, reliably, and efficiently, making it a strong candidate for widespread application in clinical and research audiometry.
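The abstract does not spell out the estimator, so the sketch below only illustrates the general idea under stated assumptions: tone responses ("heard"/"not heard") at sampled (frequency, intensity) points are fed to a Gaussian process classifier, and a continuous threshold audiogram is read off as the intensity at which the predicted probability of hearing crosses 0.5 at each frequency. The simulated listener, kernel choice, and 50% threshold criterion are assumptions, not details taken from the study.

```python
# Hypothetical sketch: continuous audiogram estimation via Gaussian process
# classification (not the authors' exact implementation).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Simulated listener: true threshold (dB HL) rises with frequency (assumption).
def true_threshold(log2_freq_khz):
    return 15.0 + 8.0 * log2_freq_khz  # purely illustrative

# Simulated tone presentations: (log2 frequency in kHz, intensity in dB HL).
freqs = rng.uniform(-2, 3, 300)            # roughly 0.25-8 kHz on a log2 axis
levels = rng.uniform(-10, 90, 300)
heard = (levels > true_threshold(freqs) + rng.normal(0, 3, 300)).astype(int)

X = np.column_stack([freqs, levels])
gp = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 10.0]))
gp.fit(X, heard)

# Continuous threshold estimate: lowest intensity where P(heard) >= 0.5.
test_levels = np.linspace(-10, 90, 201)
for f in np.linspace(-2, 3, 6):
    grid = np.column_stack([np.full_like(test_levels, f), test_levels])
    p_heard = gp.predict_proba(grid)[:, 1]
    thr = test_levels[np.argmax(p_heard >= 0.5)]
    print(f"{2**f:5.2f} kHz: estimated threshold ~ {thr:5.1f} dB HL")
```

Because the classifier yields a posterior over the whole frequency-intensity plane, the threshold can be evaluated at any frequency rather than only at the discrete audiometric frequencies, which is the sense in which the estimate is "continuous."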
Even simple sensory stimuli evoke neural responses that are dynamic and complex. Are these temporally patterned neural activities important for controlling behavioral output? Here, we investigated this issue. Our results reveal that in the insect antennal lobe, due to circuit interactions, distinct neural ensembles are activated during the presentation of every odorant and immediately following its termination. Such non-overlapping response patterns were not observed even when the stimulus intensity or identity was changed. In addition, we find that the ON and OFF ensemble activities differ in their ability to recruit recurrent inhibition, entrain field-potential oscillations and, more importantly, in their relevance to behaviour (initiating versus resetting conditioned responses). Notably, we find that a strikingly similar strategy is also used for encoding sound onsets and offsets in the marmoset auditory cortex. In sum, our results suggest a general scheme in which recurrent inhibition is associated with stimulus ‘recognition’ and ‘derecognition’.