2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)
DOI: 10.1109/icse43902.2021.00047

Testing Machine Translation via Referential Transparency

Abstract: Machine translation software has seen rapid progress in recent years due to the advancement of deep neural networks. People routinely use machine translation software in their daily lives for tasks such as ordering food in a foreign restaurant, receiving medical diagnosis and treatment from foreign doctors, and reading international political news online. However, due to the complexity and intractability of the underlying neural networks, modern machine translation software is still…
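The abstract is truncated above, but the paper's core idea, referred to as RTI (referentially transparent inputs) in the citation statements below, can be sketched as follows: a phrase whose meaning does not depend on its context should receive roughly the same translation on its own as it does inside a sentence that contains it, and a mismatch suggests a translation bug. The sketch below is only an illustration of that check under stated assumptions; the translate wrapper, the token-window matcher, and the 0.8 similarity threshold are placeholders, not the paper's exact algorithm.

import difflib

def translate(text: str) -> str:
    # Hypothetical hook for the machine translation system under test
    # (e.g., an HTTP client around a translation API); not a real library call.
    raise NotImplementedError("plug in the MT system under test")

def roughly_contained(piece: str, whole: str, threshold: float = 0.8) -> bool:
    # Slide a window of the same token length as `piece` over `whole` and
    # report whether any window is sufficiently similar (character-level ratio).
    piece_tokens, whole_tokens = piece.split(), whole.split()
    n = len(piece_tokens)
    if n == 0 or n > len(whole_tokens):
        return False
    best = 0.0
    for i in range(len(whole_tokens) - n + 1):
        window = " ".join(whole_tokens[i:i + n])
        best = max(best, difflib.SequenceMatcher(None, piece, window).ratio())
    return best >= threshold

def check_referential_transparency(sentence: str, phrase: str) -> bool:
    # A phrase with a context-independent meaning (e.g., a noun phrase) should
    # keep roughly the same translation when embedded in a containing sentence.
    # Returns False when the pair looks inconsistent, i.e., a suspected bug.
    assert phrase in sentence, "the phrase must occur verbatim in the sentence"
    return roughly_contained(translate(phrase), translate(sentence))

# Usage (once `translate` is wired to a real MT backend):
# check_referential_transparency(
#     "The tourist asked the foreign doctor for a diagnosis.",
#     "the foreign doctor",
# )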

Cited by 42 publications (17 citation statements)
References 44 publications

Citation statements:
“…Behavioral Testing for MT: A number of previous works (Gupta et al., 2020; Sun et al., 2020; Wang et al., 2021; He et al., 2021) have tried to construct tests for eliciting errors in NMT systems' behavior. We present a thorough comparison of SALTED against these works along five dimensions in appendix G.…”
Section: Related Work
confidence: 99%
“…22: A comparison of existing behavioral testing methods for NMT along five dimensions. The compared methods are: SIT, PatInv (Gupta et al., 2020), TransRepair (Sun et al., 2020) and RTI (He et al., 2021).…”
confidence: 99%
“…These two tasks can be considered as translation tasks in general, and there is a need to quantify the uncertainty of the translated results, i.e., the model should not produce incorrect translation results if the output is uncertain. Indeed, numerous works have proposed techniques for testing such machine translation tasks for natural language processing [23,24,56], but these techniques focus exclusively on errors in the translated sentences, rather than on their uncertainty. Additionally, those methods are limited to natural language processing, whereas our approach is suited to code representation learning.…”
Section: Discussion
confidence: 99%
“…Word-level techniques [62,63] are based on word substitution, usually using synonym sets or Pre-trained Language Models (PLMs). Sentence-level techniques [26] change the whole structure of the sentences, either by adding a sentence to the original text or by transforming the entire text into another semantically similar form. Techniques that combine different levels [42] can be categorized as multi-level techniques.…”
Section: Preliminaries 2.1 Testing Techniques for NLP Software
confidence: 99%
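To make the word-level idea in the excerpt above concrete, here is a minimal, self-contained sketch in Python. The synonym table and example sentence are toy stand-ins (real techniques draw substitutes from synonym sets such as WordNet or from a masked language model and add semantic-similarity filters), and the comparison against the MT system's outputs is only described in the trailing comment.

from typing import Iterator, Tuple

# Toy synonym table; a placeholder for WordNet-style synonym sets or PLM suggestions.
TOY_SYNONYMS = {
    "quick": ["fast", "rapid"],
    "doctor": ["physician"],
    "buy": ["purchase"],
}

def word_level_variants(sentence: str) -> Iterator[Tuple[str, str, str]]:
    # Yield (original_word, substitute, mutated_sentence) triples produced by
    # replacing one word at a time with a listed synonym.
    tokens = sentence.split()
    for i, token in enumerate(tokens):
        for substitute in TOY_SYNONYMS.get(token.lower(), []):
            mutated = tokens[:i] + [substitute] + tokens[i + 1:]
            yield token, substitute, " ".join(mutated)

# Each variant would be translated alongside the original sentence; outputs that
# diverge beyond the substituted word are flagged as suspicious translations.
for original, substitute, mutated in word_level_variants("The doctor will buy a quick lunch"):
    print(f"{original} -> {substitute}: {mutated}")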