2020
DOI: 10.1016/j.neucom.2020.01.003
Bi-Decoder Augmented Network for Neural Machine Translation

Abstract: Neural Machine Translation (NMT) has become a popular technology in recent years, and the encoder-decoder framework is the mainstream among all the methods. The quality of the semantic representations produced by the encoder is crucial and can significantly affect the performance of the model. However, existing unidirectional source-to-target architectures can hardly produce a language-independent representation of the text because they rely heavily on the specific relations of the given language p…

Cited by 5 publications (1 citation statement)
References 25 publications
“…Gated recurrent unit (GRU) employment of [91] and part-of-speech tagging of [92] in the attention mechanism might also be useful to employ in Bangla MT. In the line of data augmentation, input denoising plus an auxiliary decoder, investigated in [93], and self-learning (training with synthetically generated data from a monolingual source-language corpus), investigated in [94], are also intuitive ways to improve MT performance for a low-resource language like Bangla. Multi-source translation, an approach that exploits multiple inputs (e.g., in two different languages) to increase performance, and the missing-data management investigated by Nishimura et al. [95] might also be a way to achieve better Bangla MT performance.…”
Section: Future Prospects of Bangla MT From This Study
confidence: 99%
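The input-denoising augmentation the statement attributes to [93] typically perturbs the source sentence (word dropout and bounded local shuffling) so the model learns representations robust to surface noise. A minimal sketch, assuming word dropout plus window-limited shuffling; the function name and parameters below are illustrative assumptions, not the actual interface of [93]:

```python
import random

def denoise(tokens, drop_prob=0.1, shuffle_window=3, rng=None):
    """Noise a source sentence for denoising-style data augmentation.

    Illustrative sketch (names/parameters are assumptions, not from [93]):
      1. word dropout: each token is dropped with probability `drop_prob`;
      2. local shuffle: tokens are reordered by position plus a random
         jitter bounded by `shuffle_window`, so reordering stays local.
    """
    rng = rng or random.Random(0)
    # Word dropout: keep each token with probability 1 - drop_prob,
    # but never emit an empty sentence.
    kept = [t for t in tokens if rng.random() > drop_prob] or tokens[:1]
    # Bounded local shuffle: sort by (index + jitter), jitter < shuffle_window,
    # so no token moves more than shuffle_window positions.
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]

noisy = denoise("the quick brown fox jumps over the lazy dog".split(),
                drop_prob=0.2, shuffle_window=2, rng=random.Random(42))
```

In a denoising setup, the auxiliary decoder is then trained to reconstruct the original sentence from the noised version, while the primary decoder translates it.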