Research has shown that adults’ lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g., bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2013, 2016). However, the mechanism underlying this word-meaning priming effect remains unclear, and competing accounts make different predictions about the extent to which information about word meanings gained in one modality (e.g., speech) transfers to the other modality (e.g., reading) to aid comprehension. In two Web-based experiments, ambiguous target words were primed with either written or spoken sentences that biased their interpretation toward a subordinate meaning, or were left unprimed. About 20 min after prime exposure, interpretation of these target words was tested by presenting them in either written or spoken form, using word association (Experiment 1, N = 78) and speeded semantic relatedness decisions (Experiment 2, N = 181). Both experiments replicated the auditory unimodal priming effect shown previously (Rodd et al., 2013, 2016) and revealed significant cross-modal priming: primed meanings were retrieved more frequently and more quickly in all primed conditions than in the unprimed baseline. Furthermore, priming levels did not differ reliably between unimodal and cross-modal prime-test conditions. These results indicate that recent experience with ambiguous word meanings can bias the reader’s or listener’s later interpretation of these words in a modality-general way. We identify possible loci of this effect within the context of models of long-term priming and ambiguity resolution.
Current models of word-meaning access typically assume that lexical-semantic representations of ambiguous words (e.g., ‘bark of the dog/tree’) reach a relatively stable state in adulthood, with only the relative frequencies of meanings and the immediate sentence context determining meaning preference. However, recent experience also affects interpretation: recently encountered word meanings become more readily available (Rodd et al., 2013, 2016). Here, 3 experiments investigated how multiple encounters with word meanings influence the subsequent interpretation of these ambiguous words. Participants heard ambiguous words contextually disambiguated toward a particular meaning and, after a 20- to 30-min delay, interpretations of the words were tested in isolation. We replicate the finding that a single encounter with an ambiguous word biases the later interpretation of this word toward the primed meaning, for both subordinate (Experiments 1, 2, 3) and dominant meanings (Experiment 1). In addition, for the first time, we show cumulative effects of multiple repetitions of both the same and different meanings. The effect of a single subordinate exposure persisted after a subsequent encounter with the dominant meaning, relative to a dominant exposure alone (Experiment 1). Furthermore, 3 subordinate word-meaning repetitions provided an additional boost to priming compared with 1, although only when their presentation was spaced (Experiments 2, 3); massed repetitions provided no such boost (Experiments 1, 3). These findings indicate that comprehension is guided by the collective effect of multiple recently activated meanings and that the spacing of these activations is key to producing lasting updates to the lexical-semantic network.
Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and electroencephalography (EEG), we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation toward a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400–800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or the elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations, as reflected in a positive correlation between vocabulary size and comprehension success.
Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrates that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American-dominant meaning (e.g., the hat meaning of "bonnet") in a word association task when they heard the words in an American accent than in a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and a sentence comprehension task (Experiment 5) confirm that accent modulates online meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in the same way as accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead, they use accent information to determine the dialect identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.