2022
DOI: 10.1155/2022/5483535
Adding Visual Information to Improve Multimodal Machine Translation for Low-Resource Language

Abstract: Machine translation makes it easy for people to communicate across languages. Multimodal machine translation is an important research direction within machine translation; it uses feature information such as images and audio to help translation models produce higher-quality target-language output. However, the vast majority of current research has been conducted on commonly used corpora such as English, French, and German, and less work has been done on low-resource languages,…
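The paper's own architecture is not reproduced here; as a hedged sketch of the general idea of letting visual features assist a text translation model, image features can be projected into the text hidden space and attached to the encoder output so the decoder attends over both. All dimensions, module names, and the fusion choice below are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: fuse image features with text encoder states before decoding.
import torch
import torch.nn as nn

class VisualFusion(nn.Module):
    """Project image features into the text hidden space and prepend them
    to the encoder states, so the decoder can attend to visual context."""
    def __init__(self, image_dim=2048, hidden_dim=512):
        super().__init__()
        self.proj = nn.Linear(image_dim, hidden_dim)

    def forward(self, text_states, image_feats):
        # text_states: (batch, src_len, hidden_dim) from the text encoder
        # image_feats: (batch, image_dim), e.g. pooled CNN features (assumed extractor)
        visual = self.proj(image_feats).unsqueeze(1)    # (batch, 1, hidden_dim)
        return torch.cat([visual, text_states], dim=1)  # decoder attends over both
```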

Cited by 5 publications (2 citation statements)
References 17 publications
“…Then, using data from the target low-resource language L_n only, the model is fine-tuned in a targeted way to better capture the nuances and features specific to the target language. Each fine-tuning follows the standard parameter update rule, i.e., the model parameters θ are updated according to the loss function Loss(θ; x, y) so as to minimize the loss on the corresponding training set via gradient descent or other optimization algorithms [24].…”
Section: Fine-tuning Strategies and Low-resource Translation Performa… (mentioning)
confidence: 99%
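A minimal sketch of the fine-tuning update described in the statement above, assuming a PyTorch-style model and a dataloader that yields (source, target) pairs from the target low-resource language only; `model`, `target_loader`, and `loss_fn` are illustrative placeholders, not the cited paper's code.

```python
# Fine-tuning loop: the pretrained parameters theta are updated by gradient
# descent on Loss(theta; x, y), using data from the target language L_n only.
import torch

def finetune_on_target_language(model, target_loader, loss_fn, lr=1e-5, epochs=3):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # plain gradient descent;
    # Adam or similar would also fit "other optimization algorithms"
    model.train()
    for _ in range(epochs):
        for x, y in target_loader:       # batches drawn from L_n data only
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)  # Loss(theta; x, y)
            loss.backward()              # gradients w.r.t. theta
            optimizer.step()             # theta <- theta - lr * grad
    return model
```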
“…Further, the text-based model was trained on a larger bilingual corpus, which could be a major contributing factor to its better results. A Multimodal Transformer architecture was adopted for English-Hindi [63], which improved the results over its UNMT baseline. Other works utilized multiple captions from the dataset to train the models [65].…”
Section: Multimodal Translation on Indian Languages (mentioning)
confidence: 99%