The role of visual feedback during the production of American Sign Language was investigated by comparing the size of signing space during conversations and narrative monologues for normally sighted signers, signers with tunnel vision due to Usher syndrome, and functionally blind signers. The interlocutor for all groups was a normally sighted deaf person. Signers with tunnel vision produced a greater proportion of signs near the face than blind and normally sighted signers, who did not differ from each other. Both groups of visually impaired signers produced signs within a smaller signing space for conversations than for monologues, but we hypothesize that they did so for different reasons. Signers with tunnel vision may align their signing space with that of their interlocutor. In contrast, blind signers may enhance proprioceptive feedback by producing signs within an enlarged signing space for monologues, which do not require switching between tactile and visual signing. Overall, we hypothesize that signers use visual feedback to phonetically calibrate the dimensions of signing space, rather than to monitor language output.
In American Sign Language (ASL), a receiver watches the signer and receives language visually. In contrast, when using tactile ASL, a variety of ASL, the deaf-blind receiver receives language by placing a hand on top of the signer's hand. In the study described in this article we compared the functions and frequency of the signs YES and #NO in tactile ASL and visual ASL. We found that YES and/or #NO were used for twelve functions in both varieties. There was, however, some variation: in one environment YES occurred in tactile ASL but not in visual ASL. With regard to frequency, the two signs occurred far more often in tactile ASL. Unexpectedly, significant variation was also found within visual ASL, depending on the number of interviewees in a session: YES and #NO were used more frequently with two or more interviewees and less often when only one interviewee was present. These findings led us to propose a "visibility continuum" to account for the variation between visual and tactile ASL, as well as for the variation within visual ASL. The data also reveal variation in tactile ASL that correlates with role and gender, as well as with the age at which a participant started using tactile ASL (i.e., similar to age-of-acquisition effects).