Although there has been recent research on other media accessibility services such as audio description, little attention has been paid to audio subtitling and the way subtitles are delivered orally. This article reports the outcome of an experiment in which 42 Spanish blind and partially sighted participants were exposed to two diverging audio subtitling strategies: audio subtitles with a voice-over effect and audio subtitles with a dubbing effect. Data on the users' emotional responses were collected through a tactile and simplified version of the SAM questionnaire and through psychophysiological recordings of electrodermal activity and heart rate. The results obtained from both methods show no statistically significant differences between the two effects. However, the questionnaire results indicate that emotions were induced in the participants, which calls for further research on the topic using these methods.
The Self-Assessment Manikin (SAM) is one of the most extensively used instruments for the situational assessment of emotional state in experimental or clinical contexts of emotional induction. However, no instrument of this kind has been adapted for blind or visually impaired people. In this paper, we present the results of the preliminary validation of a tactile adaptation of the SAM, the Tactile Self-Assessment Manikin (T-SAM). For this purpose, five people with visual disabilities participated in a focus group in which the usability of this adaptation was evaluated, as well as its usefulness in representing the valence and arousal subscales of the original instrument. The content analysis of this focus group suggests adequate content validity: the participants correctly understood both the purpose of the instrument and the tactile representations of the valence and arousal constructs created by the research team. However, the difficulty that people blind from birth have in understanding the graphic representation of an emotional facial expression was detected, which represents a limitation to be controlled for in future steps of the validation of the T-SAM.
Multilingualism in films has increased in recent productions as a reflection of today's globalised world. Different translation transfer modes, such as dubbing or subtitling, are combined to maintain a film's multilingual essence when it is translated into other languages. Within media accessibility, audio subtitles, an aurally rendered version of written subtitles, are used to provide access for audiences with vision or reading difficulties. Drawing on Sternberg's representation of polylingualism (1981), this article offers a categorisation of the strategies that may be used to reveal multilingualism in audiovisual content through audio subtitles, similar to the way Szarkowska, Zbikowska, & Krejtz (2013) did with subtitles for the deaf and the hard of hearing. Taking a descriptive approach, two main strategies or effects for the delivery of audio subtitles, dubbing and voice-over, are highlighted and explained. By combining these two effects with the information provided by the audio description, the levels of the categorisation are defined from more to less multilingualism-revealing: vehicular matching, selective reproduction, verbal transposition, explicit attribution and homogenising convention.