2013
DOI: 10.1080/01690965.2012.672229
When cues combine: How distal and proximal acoustic cues are integrated in word segmentation

Cited by 41 publications (48 citation statements)
References 44 publications
“…These findings therefore provide support for the hypothesis that Armstrong produced the word a in his famous quote upon the lunar landing, but that the acoustic and contextual cues which would typically support perception of the word a were ambiguous or missing (Banzina & Dilley, 2010; Heffner & Dilley, 2011; Heffner et al., 2012). These data also provide further evidence that function word reduction can lead to substantial acoustic ambiguity about the presence of a word, as shown by Bell et al. (2003) and Shockey (2003).…”
Section: Discussion (supporting; confidence: 64%)
“…Subsequent research has shown that listeners hear function words less often when there is substantial coarticulation between a function word and the preceding syllable than when there is less coarticulation (Heffner, Dilley, McAuley, & Pitt, 2012). The rate of context speech syllables also influences whether a coarticulated function word is heard (Dilley & Pitt, 2010; Heffner et al., 2012; Vinke, Dilley, Banzina, & Henry, 2009; Banzina & Dilley, 2010).…”
Section: Introduction (mentioning; confidence: 99%)
“…(references just representative): understanding speech in noise from ‘intelligent interpretation’ of wide-ranging spectro-temporal attributes of the signal [9–12], Gestalt-type processes of auditory scene analysis [13–17], online use of fine phonetic detail to facilitate access to meaning [18, 19], listeners’ temporary adaptation to accents and ambient conditions [20], perhaps partly via tuning of the outer hair cells [7], and the influence of context on how very casual speech is understood [21–23], including influences of speech rate and rhythm early in an utterance on the interpretation of words in later portions of the speech [24, 25]. In both speech and music, pattern completion may explain the subjective experience of communicatively significant pulse when there is no event in the physical signal (e.g.…”
Section: Introduction (mentioning; confidence: 99%)
“…For example, what is the influence of the durational (and, more generally, phonetic) properties of the other vowels in the stimuli, and do illusory vowel percepts increase or decrease depending on whether the surrounding vowels are shorter or longer? There is some reason to believe this should happen based on speech-rate effects on the parsing of ambiguous stimuli (Dilley, Morrill, & Banzina, 2013; Dilley & Pitt, 2010; Heffner, Dilley, McAuley, & Pitt, 2013). As Dilley et al. have consistently found, whole syllables can ‘vanish’ in ambiguous stimuli due to the distal/proximal speech rates.…”
Section: Discussion (mentioning; confidence: 99%)