2022
DOI: 10.1007/978-981-19-2281-7_55

Machine Translation for Indian Languages Utilizing Recurrent Neural Networks and Attention

Cited by 1 publication (1 citation statement)
References 9 publications

“…The essence of NMT consists of two key elements: the "encoder" and the "decoder". The encoder transforms the input text into a context vector (c), and the decoder then processes this vector to produce the output sentence of length m, one word at a time. Unlike other machine translation approaches, NMT requires minimal domain expertise [12]. The encoder-decoder model for NMT can be represented as a block diagram, shown in Figure 5.…”
Section: Neural Machine Translation (NMT), mentioning
confidence: 99%
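To make the encoder-decoder flow in the quoted statement concrete, the following is a minimal sketch in PyTorch of an RNN encoder-decoder with attention, in the spirit of the cited paper's title. The GRU cells, dot-product attention variant, vocabulary size, dimensions, and the assumption that index 0 is the start-of-sentence token are all illustrative choices, not details taken from the paper.

```python
# Minimal encoder-decoder sketch: the encoder maps the source sentence to
# context representations; the decoder attends over them and emits one
# target word at a time. Illustrative dimensions only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, src):                       # src: (batch, src_len)
        outputs, hidden = self.rnn(self.embed(src))
        return outputs, hidden                    # outputs: (batch, src_len, hid)

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim, hid_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim + hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, token, hidden, enc_outputs):
        # token: (batch, 1), the previously generated word.
        emb = self.embed(token)                                    # (batch, 1, emb)
        # Dot-product attention over encoder outputs (one assumed variant).
        scores = torch.bmm(enc_outputs, hidden[-1].unsqueeze(2))   # (batch, src_len, 1)
        weights = torch.softmax(scores, dim=1)
        context = torch.bmm(weights.transpose(1, 2), enc_outputs)  # (batch, 1, hid)
        output, hidden = self.rnn(torch.cat([emb, context], dim=2), hidden)
        return self.out(output.squeeze(1)), hidden                 # logits over vocab

# Greedy decoding, one word per step, mirroring the statement's description.
enc, dec = Encoder(1000, 64, 128), Decoder(1000, 64, 128)
src = torch.randint(0, 1000, (2, 7))              # toy batch of source sentences
enc_out, hidden = enc(src)
token = torch.zeros(2, 1, dtype=torch.long)       # assume index 0 is <sos>
for _ in range(5):                                # output length m = 5
    logits, hidden = dec(token, hidden, enc_out)
    token = logits.argmax(dim=1, keepdim=True)
```

In this sketch the "context vector" is recomputed at every decoding step from the attention weights, rather than being a single fixed vector, which is what distinguishes attention-based NMT from the plain encoder-decoder the passage first describes.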