This article explains how interpreters for deaf-blind people coordinate and express turn-taking signals in an interpreted dialogue. Empirical materials are derived from a video-ethnographic study of an interpreter-mediated board meeting with five deaf-blind participants. The results show that the interpreters provide access to visual and auditory signals for orientation and attention, exchange minimal response signals, and actively take part in the negotiation of turns. As a result of these action patterns, a sequential order of interaction is established in the dialogue, and despite their inability to see or hear one another, the board members participate actively and communication flows.
This article reports on a linguistic study examining the use of real space blending in the tactile signed languages of Norwegian and Swedish signers who are both deaf and blind. Tactile signed languages are typically produced by interactants in contact with each other's hands while signing. Of particular interest to this study are utterances which not only consist of the signer producing signs with his or her own hands (or other body parts), but which also recruit the other interactant's hands (or another body part). These utterances, although perhaps less frequent, are co-constructed, in a very real sense, and they illustrate meaning construction during emerging, embodied discourse. Here, we analyze several examples of these types of utterances from a cognitive linguistic and cognitive semiotic perspective to explore how interactants prompt meaning construction through touch and the involvement of each other's bodies during a particular type of co-regulation.
This article focuses on how environmental descriptions of the context can provide access to information and dialogical participation for deafblind persons. Multimodal interaction is needed to communicate with deafblind persons whose combined sensory loss impedes their access to the environment and to ongoing interaction. Empirical data on interpreting for deafblind persons are analyzed to give insight into how this task may be performed. All communicative activities vary with their context, participants, and aim. In this study, our data are part of a cross-linguistic study of tactile sign language and were gathered during a guided tour for a deafblind group. The guided tour was tailored to a specific group (adult deafblind tactile signers and their interpreters) visiting one of the oldest cathedrals and pilgrim sites in Scandinavia, with interpreters following up the guide’s presentation and providing descriptions based on the given situation. The tour and the interpreters’ work were videotaped, and the ongoing interaction and communication have been studied through video-ethnographic methods and conversation analysis. The data have been investigated for the research question: What elements are involved in descriptions that provide deafblind individuals access to their environments? Theories from multimodal communication studies inform the ways tactile descriptions are presented and analyzed, part of which involves investigating interaction at the microlevel. An overall inspiration for this study is interaction research based on authentic formal and informal conversations and on ways of analyzing embodied action and situated gestures in studies of human interaction. In addition, the concepts of “frontstage,” “backstage,” and “main conversation” are brought into our interpreter-mediated data to follow how meaning is built in complex conversations. Theories on interaction are used in the analyses to illustrate the participation framework between the guide, the interpreter, the deafblind person, and the situated frame of their interaction. The study opens up a broader understanding of the repertoire of multimodal interaction and of how such interaction may be handled as input in communication processes. This is of relevance for communication with deafblind persons, for professionals meeting blind and deafblind clients, and for knowledge of multimodal interaction in general.