2016
DOI: 10.3389/fnhum.2016.00234
Modulation of Auditory Responses to Speech vs. Nonspeech Stimuli during Speech Movement Planning

Abstract: Previously, we showed that the N100 amplitude in long latency auditory evoked potentials (LLAEPs) elicited by pure tone probe stimuli is modulated when the stimuli are delivered during speech movement planning as compared with no-speaking control conditions. Given that we probed the auditory system only with pure tones, it remained unknown whether the nature and magnitude of this pre-speech auditory modulation depends on the type of auditory stimulus. Thus, here, we asked whether the effect of speech movement …

Cited by 30 publications
(32 citation statements)
References 54 publications
“…A decrease in the amplitudes of M100 and M200 components (and their EEG counterparts N100 and P200 components) for self-generated tones is a very robust effect, reliably demonstrating the influences that actions can have on auditory processing [ 1 , 2 , 4 , 18 , 19 ]. In addition, many studies demonstrated a functional disassociation between the two components [ 19 , 20 , 21 , 22 , 23 ], which is also supported by our results showing different characteristics of both components over testing sessions in sham and real stimulation conditions. The M100 attenuation in the baseline session (pre-sham and pre-real) was localized in auditory cortex, which we interpret as a result of predictions from the forward model.…”
Section: Discussion (supporting)
confidence: 90%
“…This explanation of M200 component is supported by other studies showing that the M200 amplitude (or P200 in EEG studies) increases when a stimulus cannot be predicted [ 29 ] and when a predicted stimulus is omitted [ 30 ] or violated [ 31 ]. This may also explain why, in EEG studies, N100 attenuation, but not P200 attenuation, was observed in the following two cases: (1) when a stimulus was followed by actions of an atypical effector, like the eye [ 19 ] or the foot [ 21 ], and (2) when a non-speech stimulus followed speech movement planning [ 22 ]. In summary, M100/N100 attenuation may be the result of predictions from forward models that act on low-level features of the stimulus.…”
Section: Discussion (mentioning)
confidence: 99%
“…However, there may be cases where one solution is preferred over the other. One such case is when only a portion of the source space is to be reconstructed, for example placing a small number of 'virtual electrodes' at regions of interest (Engels et al, 2016;Hillebrand et al, 2016). wLCMV would be preferred in this case, as the beamformer solution at one dipole is not dependent on the rest of the source space (Van Veen et al, 1997).…”
Section: Small Amount of Cross Talk (mentioning)
confidence: 99%
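The excerpt above notes that an LCMV beamformer's solution at one dipole does not depend on the rest of the source space (Van Veen et al., 1997), which is why it suits "virtual electrode" analyses at a few regions of interest. A minimal sketch of the unit-gain LCMV weights for a single fixed-orientation dipole illustrates this: the weights are built only from the sensor covariance `C` and that dipole's leadfield `L`, with no reference to other sources. All variable names and the random data here are illustrative, not from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors = 32

# Sensor-space data covariance; made symmetric positive definite here
# for illustration (in practice it is estimated from the recording).
A = rng.standard_normal((n_sensors, n_sensors))
C = A @ A.T + 10.0 * np.eye(n_sensors)

# Leadfield of a single fixed-orientation dipole (n_sensors x 1).
L = rng.standard_normal((n_sensors, 1))

# Unit-gain LCMV weights: w = C^{-1} L (L^T C^{-1} L)^{-1}.
# Note the inputs are only C and this dipole's L, so adding or removing
# other candidate dipoles elsewhere leaves w unchanged.
Cinv_L = np.linalg.solve(C, L)
w = Cinv_L @ np.linalg.inv(L.T @ Cinv_L)

# The unit-gain constraint w^T L = 1 holds by construction.
print((w.T @ L).item())
```

Because each dipole's weights are computed independently, placing a handful of virtual electrodes costs only a handful of such solves, whereas a minimum-norm-style solution couples the whole source space through one inverse operator.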
“…Taking into account studies suggesting that the vocal motor system can modulate auditory cortical processing (Behroozmand, et al, 2016; Behroozmand, et al, 2015; Chang, et al, 2013; Cogan, et al, 2014; Daliri & Max, 2016; Greenlee, et al, 2013; Jenson, Harkrider, Thornton, Bowers, & Saltuklaroglu, 2015; Sitek, et al, 2013) we hypothesized that during vocalization, SoA-related motor activity should alter functional characteristics of auditory perceptual neuronal networks. Specifically we hypothesized that there should be a difference between bioelectrical brain responses with and without the presence of SoA associated with vocalization.…”
Section: Introduction (mentioning)
confidence: 99%