2020
DOI: 10.1038/s41598-020-66824-x
ERP mismatch response to phonological and temporal regularities in speech

Abstract: Predictions of our sensory environment facilitate perception across domains. During speech perception, formal and temporal predictions may be made for phonotactic probability and syllable stress patterns, respectively, contributing to the efficient processing of speech input. The current experiment employed a passive EEG oddball paradigm to probe the neurophysiological processes underlying temporal and formal predictions simultaneously. The component of interest, the mismatch negativity (MMN), is considered a …

Cited by 19 publications (27 citation statements)
References: 59 publications
“…We included several theoretical, lexical, and phonotactic factors that are known to influence results. While other studies found an influence on neural data of, for example, phonotactic probabilities (Bonte et al., 2005; Yasin, 2007; Emmendorfer et al., 2020) or the lexical frequency of words (Alexandrov et al., 2011; Shtyrov et al., 2011; Aleksandrov et al., 2017), we cannot provide evidence for those factors in either the electrophysiological or the behavioral data. In contrast, we identified a new influencing factor on MMN data: we found that neural effects were driven not only by phonemic features but also by perceptual and psychoacoustic differences in the perceived loudness of the stimuli.…”
Section: General Discussion and Conclusion (contrasting)
confidence: 65%
“…For instance, since the phonological feature oppositions that distinguish different vowel qualities (i.e., high vowels vs. low vowels) are based on formants (Lahiri and Reetz, 2010), they also automatically imply an acoustic difference. Moreover, when words are used as stimuli, lexical features such as frequency of occurrence (Alexandrov et al., 2011; Shtyrov et al., 2011) or phonotactic probability (Bonte et al., 2005; Yasin, 2007; Emmendorfer et al., 2020) are known to interfere with speech perception and vowel discrimination. Especially in our approach, where we tested the models' hypotheses using natural spoken German words, these influences may contribute to the pattern of results.…”
Section: Explorative Analysis For Additional Influential Factors In MMN (mentioning)
confidence: 99%
“…There is ample neural evidence supporting the aforementioned behavioral observations during speech perception, with variations in phonotactic probability and stress patterns modulating neural processing (Bonte et al., 2005; Di Liberto et al., 2019; Emmendorfer et al., 2020; Rothermich et al., 2012; Tremblay et al., 2016). However, data on the neural correlates of these features in speech production are sparse.…”
Section: Introduction (mentioning)
confidence: 72%
“…The oddball stimulus paradigm (Emmendorfer et al., 2020; Teixeira-Santos et al., 2020) with speech stimuli (350 ms duration with a 10-ms rise and fall time) was used in this study (Figure 1). The stimuli were delivered via E-Prime 2.0 (Psychology Software Tools, Inc.) through two loudspeakers, one at each ear, at a comfortable level of 65 dB SPL.…”
Section: Methods (mentioning)
confidence: 99%
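
The Methods passage quoted above specifies the stimulus timing precisely enough to illustrate in code. Below is a minimal Python/NumPy sketch of an amplitude envelope with the stated 350 ms duration and 10 ms rise/fall times; the function name, sampling rate, and linear ramp shape are illustrative assumptions, not the cited study's actual implementation (which used E-Prime 2.0).

```python
import numpy as np

def stimulus_envelope(duration_ms=350.0, ramp_ms=10.0, fs=44100):
    """Amplitude envelope with linear rise and fall ramps.

    Illustrative sketch: the 350 ms duration and 10 ms rise/fall
    time come from the quoted Methods; the linear ramp shape and
    44.1 kHz sampling rate are assumptions.
    """
    n_total = int(round(duration_ms * fs / 1000.0))
    n_ramp = int(round(ramp_ms * fs / 1000.0))
    env = np.ones(n_total)
    env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)   # 10 ms rise
    env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)  # 10 ms fall
    return env

# Applying the envelope to a (hypothetical) mono speech waveform
# sampled at the same rate suppresses onset/offset clicks:
#   shaped = waveform[:len(env)] * env
```

Ramping the onset and offset in this way is standard practice in auditory oddball designs: abrupt edges produce broadband transients that could themselves elicit mismatch responses unrelated to the phonological or temporal manipulation.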