Shared tasks are increasingly common in our field, and new challenges are proposed at almost every conference and workshop. As shared tasks have become an established way of pushing research forward, it is important to discuss how we, as researchers, organise and participate in them, and to make that information available to the community to support further improvements. In this paper, we present a number of ethical issues, along with other areas of concern, related to the competitive nature of shared tasks. Because such issues could affect research ethics in the Natural Language Processing community, we also propose the development of a framework for the organisation of and participation in shared tasks that can help mitigate these issues.
Machine Translation (MT) quality is typically assessed using automatic evaluation metrics such as BLEU and TER. Although fuzzy match values are widely used in industry to estimate the usefulness of Translation Memory (TM) matches based on text similarity, they are rarely applied to MT evaluation. We designed an experiment to test whether this fuzzy score, applied to MT output, holds up against traditional MT evaluation methods. The results obtained suggest that this metric performs at least as well as traditional MT evaluation methods.
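As an illustration only (not the metric used in the study, whose exact formula is not given here), a TM-style fuzzy match score can be approximated as a normalised similarity ratio between an MT hypothesis and a reference translation, for example with Python's standard library:

```python
from difflib import SequenceMatcher


def fuzzy_match_score(mt_output: str, reference: str) -> float:
    """Return a similarity ratio between 0 (no overlap) and 1 (identical).

    This mirrors the general idea of a TM fuzzy match value; actual TM tools
    and the study's metric may use different matching algorithms and units
    (e.g. word-level edit distance).
    """
    return SequenceMatcher(None, mt_output, reference).ratio()


# Example usage: near-identical strings receive a score close to 1,
# which can then be compared against metrics such as BLEU or TER.
print(fuzzy_match_score("the cat sat on the mat", "the cat is on the mat"))
```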
This paper reports on the organization and results of the first Automatic Translation Memory Cleaning Shared Task. This shared task is aimed at finding automatic ways of cleaning translation memories (TMs) that have not been properly curated and thus include incorrect translations. As a follow-up to the shared task, we also conducted two surveys, one targeting the teams participating in the shared task and the other targeting professional translators. While the researcher-oriented survey aimed at gathering the participants' opinions on the shared task, the translator-oriented survey aimed to better understand what constitutes a good TM unit and to inform decisions that will be taken in future editions of the task. In this paper, we report on the process of data preparation and the evaluation of the submitted automatic systems, as well as on the results of the collected surveys.
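To make the task concrete, a minimal sketch of the kind of heuristic filtering involved in TM cleaning is shown below. The thresholds and checks are hypothetical and purely illustrative; the systems submitted to the shared task used their own, typically more sophisticated, approaches.

```python
def looks_suspicious(source: str, target: str, max_length_ratio: float = 2.0) -> bool:
    """Flag a TM unit (source/target segment pair) that may be an incorrect translation.

    Illustrative heuristics only: length-ratio mismatch and untranslated copies.
    """
    src_len, tgt_len = len(source.split()), len(target.split())
    if src_len == 0 or tgt_len == 0:
        return True  # one side of the unit is empty
    if max(src_len, tgt_len) / min(src_len, tgt_len) > max_length_ratio:
        return True  # source and target lengths diverge too much
    if source.strip().lower() == target.strip().lower():
        return True  # target is an untranslated copy of the source
    return False


# Example: an English-French unit whose "translation" was left in English
print(looks_suspicious("Click the Save button.", "Click the Save button."))  # True
```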
Using machine translation for academic writing in English as a second language: results of an exploratory study on linguistic quality