Research on the exploitation of prosodic information in the comprehension of spoken language is reviewed. The research falls into three main areas: the use of prosody in the recognition of spoken words, in which most attention has been paid to the question of whether the prosodic structure of a word plays a role in initial activation of stored lexical representations; the use of prosody in the computation of syntactic structure, in which the resolution of global and local ambiguities has formed the central focus; and the role of prosody in the processing of discourse structure, in which there has been a preponderance of work on the contribution of accentuation and deaccentuation to integration of concepts with an existing discourse model. The review reveals that in each area progress has been made towards new conceptions of prosody's role in processing, and in particular this has involved abandonment of previously held deterministic views of the relationship between prosodic structure and other aspects of linguistic structure.
Participants' eye movements were monitored as they heard sentences and saw four pictured objects on a computer screen. Participants were instructed to click on the object mentioned in the sentence. There were more transitory fixations to pictures representing monosyllabic words (e.g. ham) when the first syllable of the target word (e.g. hamster) had been replaced by a recording of the monosyllabic word than when it came from a different recording of the target word. This demonstrates that a phonemically identical sequence can contain cues that modulate its lexical interpretation. This effect was governed by the duration of the sequence, rather than by its origin (i.e. which type of word it came from). The longer the sequence, the more monosyllabic-word interpretations it generated. We argue that cues to lexical-embedding disambiguation, such as segmental lengthening, result from the realization of a prosodic boundary that often but not always follows monosyllabic words, and that lexical candidates whose word boundaries are aligned with prosodic boundaries are favored in the word-recognition process.
Participants' eye movements were monitored as they followed spoken instructions to click on a pictured object with a computer mouse (e.g., "click on the net"). Participants were slower to fixate the target picture when the onset of the target word came from a competitor word (e.g., ne(ck)t) than from a nonword (e.g., ne(p)t), as predicted by models of spoken-word recognition that incorporate lexical competition. This was found whether the picture of the competitor word (e.g., the picture of a neck) was present on the display or not. Simulations with the TRACE model captured the major trends of fixations to the target and its competitor over time. We argue that eye movements provide a fine-grained measure of lexical activation over time, and thus reveal effects of lexical competition that are masked by response measures such as lexical decisions.
The role of accent in reference resolution was investigated by monitoring eye fixations to lexical competitors (e.g., candy and candle) as participants followed prerecorded instructions to move objects above or below fixed geometric shapes using a computer mouse.