2005
DOI: 10.1016/j.heares.2004.07.010

The effect of different noise types on the speech and non-speech elicited mismatch negativity

Cited by 68 publications (50 citation statements)
References 26 publications
“…In support of research studies examining the influence of background noise on CV processing (Martin et al., 1997; Shtyrov et al., 1998; Martin et al., 1999; Kozou et al., 2005; Martin & Stapells, 2005), the N1 and P2 activation in the anterior region appeared to be reduced and/or delayed. [Figure caption from the citing study: Sequence and timing of a single trial. One trial contained three spoken words, prime (S1), target (S2), and probe (S3), separated by inter-stimulus intervals (ISI) as specified.]…”
Section: Anterior SF (supporting)
confidence: 56%
“…The impact of noise on early cortical components such as the P1, N1, P2, MMN and P3a, primarily linked with pre-attentive stages (Martin et al., 1997; Shtyrov et al., 1998; Martin et al., 1999; Kozou et al., 2005; Martin & Stapells, 2005), has been well established during listening to speech syllables, and is generally characterized as reduced amplitudes and/or delayed latencies at vertex electrode sites typical of early auditory perception.…”
mentioning
confidence: 99%
“…Jaramillo et al. (2001) found that speech stimuli were more efficiently processed than harmonic tones (non-speech stimuli), as reflected by enhanced MMN and P3a ERP components. In agreement, Kozou et al. (2005) found that speech and non-speech sounds of approximately equal complexity were processed differently in both silent and noisy conditions. In silence, speech stimuli elicited an MMN response that had a smaller amplitude and a longer duration than that elicited by equally complex non-speech stimuli.…”
Section: Introduction (supporting)
confidence: 66%
“…The entire task consisted of a total of 1,000 auditory stimuli, with an interstimulus interval of 850 ms. Stimuli were delivered through two loudspeakers at a distance of 100 cm from the subjects, delivering approximately 75 dB SPL to both ears. Continuous white noise with a bandwidth from 125 to 8,000 Hz was added to the signal to create a signal-to-noise ratio of 10 dB [Kozou et al, 2005]. The two loudspeakers were positioned so that they were angled at 45° to the subjects' ears and at 90° to each other.…”
Section: Electroencephalographic Recording and Processing Procedures (mentioning)
confidence: 99%
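The quoted passage gives concrete masking parameters: continuous white noise band-limited to 125–8,000 Hz, mixed with the stimulus to a 10 dB signal-to-noise ratio. As a minimal sketch of how such a masker could be constructed, assuming NumPy/SciPy, an RMS-based SNR definition, and a Butterworth band-pass filter (none of which are specified in the quotation or in Kozou et al., 2005):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt


def add_band_limited_noise(signal, fs, snr_db=10.0, band=(125.0, 8000.0)):
    """Mix band-limited white noise into a stimulus at a target SNR.

    Sketch of the masking described in the quotation (white noise,
    125-8000 Hz band, 10 dB SNR). The 4th-order Butterworth filter and
    the RMS-based SNR definition are assumptions, not details taken
    from the cited study.
    """
    # Gaussian white noise, restricted to the stated frequency band.
    noise = np.random.randn(len(signal))
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    noise = sosfiltfilt(sos, noise)

    # Scale the noise so that 20*log10(RMS_signal / RMS_noise) = snr_db.
    rms_signal = np.sqrt(np.mean(signal ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    noise *= (rms_signal / (10 ** (snr_db / 20.0))) / rms_noise

    return signal + noise
```

The sampling rate is likewise an assumption; it must exceed 16 kHz (e.g. 44.1 kHz) so that the 8 kHz band edge stays below the Nyquist frequency. The sketch only illustrates the SNR arithmetic implied by the quoted setup, not the cited study's actual stimulus-generation pipeline.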