This paper describes a large-scale survey of machine translation (MT) competencies conducted by a non-commercial, publicly funded European research project. Firstly, we highlight the increased prevalence of translation technologies in the translation and localisation industry, and build on this by reporting on survey data derived from 438 validated respondents, including freelance translators, language service providers, translator trainers, and academics. We then focus on ascertaining the prevalence of translation technology usage on a fine-grained scale, addressing aspects of MT, quality assessment techniques, and post-editing. We report a strong need for improved quality assessment methods, tools, and training, owing in part to the wide variance in approaches and combinations of methods, and to a lack of knowledge and resources. We note the growing uptake of MT and the perceived increase of its prevalence in future workflows. We find that this adoption of MT has led to significant changes in the human translation process, in which post-editing appears to be used exclusively for high-quality content publication. Lastly, we echo the needs of the translation industry and community in an attempt to provide a more comprehensive snapshot to inform the provision of translation training and the need for increased technical competencies.
In this paper we argue that the time is ripe for translator educators to engage with Statistical Machine Translation (SMT) in more profound ways than they have done to date. We explain the basic principles of SMT and reflect on the role of humans in SMT workflows. Against a background of diverging opinions on the latter, we argue for a holistic approach to the integration of SMT into translator training programmes, one that empowers rather than marginalises translators. We discuss potential barriers to the use of SMT by translators generally, and in translator training in particular, and propose some solutions to the problems thus identified. More specifically, cloud-based services are proposed as a means of overcoming some of the technical and ethical challenges posed by more advanced uses of SMT in the classroom. Ultimately the paper aims to pave the way for the design and implementation of a new translator-oriented SMT syllabus at our own university and elsewhere.
The use of video has become well established in education, from traditional courses to blended and online courses. It has grown both in its diversity of applications and in its content. Such educational video, however, is not fully accessible to all students, particularly those who require additional visual support or who are studying in a foreign language. Subtitles (also known as captions) represent a unique solution to these language and accessibility barriers; however, the impact of subtitles on cognitive load in such a rich and complex multimodal environment has yet to be determined. Cognitive load is a complex construct, and its measurement by means of single, indirect, and unidimensional methods is a severe methodological limitation. Building upon previous work from several disciplines, this paper moves to establish a multimodal methodology for the measurement of cognitive load in the presence of educational video. We show how this methodology, with refinement, can allow us to determine the effectiveness of subtitles as a learning support in educational contexts. This methodology will also make it possible to analyse the impact of other multimedia learning technology on cognitive load.