Language use can be viewed as a form of joint activity that requires the coordination of meaning between individuals. Because the linguistic signal is notoriously ambiguous, interlocutors need to draw upon additional sources of information to resolve ambiguity and achieve shared understanding. One way individuals can achieve coordination is by using inferences about the interlocutor's intentions and mental states to adapt their behavior. However, such an inferential process can be demanding in terms of both time and cognitive resources. Here, we suggest that interaction provides interlocutors with many cues that can support coordination of meaning, even when these cues are neither produced intentionally for that purpose nor interpreted as signals of the speaker's intentions. In many circumstances, interlocutors can take advantage of these cues to adapt their behavior in ways that promote coordination, bypassing the need to resort to deliberative inferential processes.
During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message, whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when participants explicitly monitored for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grained level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.
Repeated reference creates strong expectations in addressees that a speaker will continue to use the same expression for the same object. The authors investigate the root reason for these expectations by comparing a cooperativeness-based account (Grice, 1975) with a simpler consistency-based account. In two eye-tracking experiments, they examined the expectations underlying the effect of precedents on comprehension, showing that listeners expect speakers to be consistent in their use of expressions even when these expectations cannot be motivated by the assumption of cooperativeness. The authors conclude that although this phenomenon appears to be motivated by cooperativeness, listeners' expectation that speakers use expressions consistently is governed by a more general expectation of consistency.