2022
DOI: 10.1371/journal.pone.0273156

Bi-directional long short term memory-gated recurrent unit model for Amharic next word prediction

Abstract: Next word prediction helps users write more accurately and quickly. It is particularly important for Amharic, where different characters are produced by pressing the same consonant together with different vowels, combinations of vowels, and special keys. We therefore present a Bi-directional Long Short Term-Gated Recurrent Unit (BLST-GRU) network model for next word prediction in Amharic. We evaluate the proposed network model on 63,300 Amharic sentences, producing 78.6% accuracy.
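As a concrete illustration of the architecture the abstract describes, the sketch below stacks a bidirectional LSTM over a GRU to score candidate next words. It is a minimal sketch under stated assumptions: the vocabulary size, context length, layer widths, and placeholder data are illustrative, since the abstract does not give the paper's exact configuration.

# Minimal BLSTM + GRU next-word predictor (TensorFlow/Keras).
# VOCAB_SIZE, SEQ_LEN, and layer widths are assumptions, not the paper's values.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, GRU, Dense

VOCAB_SIZE = 10_000  # assumed number of Amharic word types
SEQ_LEN = 5          # assumed context window: predict word 6 from words 1-5

model = Sequential([
    Embedding(VOCAB_SIZE, 128),                       # word indices -> vectors
    Bidirectional(LSTM(128, return_sequences=True)),  # reads context both ways
    GRU(128),                                         # compresses to one state
    Dense(VOCAB_SIZE, activation="softmax"),          # distribution over next word
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training pairs: each row of X is SEQ_LEN word indices; y is the word that follows.
X = np.random.randint(0, VOCAB_SIZE, size=(64, SEQ_LEN))  # placeholder data
y = np.random.randint(0, VOCAB_SIZE, size=(64,))
model.fit(X, y, epochs=1, verbose=0)

The bidirectional layer gives each context word a representation informed by both its left and right neighbours, and the GRU then reduces that sequence to a single state from which the softmax layer predicts the next word.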

Cited by 8 publications (2 citation statements). References 18 publications.
“…Memory (LSTM) and Bi-LSTM were explored for the task of predicting the next word, and accuracies of 59.46% and 81.07% were observed for LSTM and Bi-LSTM, respectively. Endalie et al. [13] present a Bi-directional Long Short Term-Gated Recurrent Unit (BLST-GRU) network model for predicting the next word in Amharic. They evaluated the proposed network model on 63,300 Amharic sentences, producing 78.6% accuracy.…”
Section: Sharma N Goel Used Two Deep Learning Techniques Namely Long ...
Citation type: mentioning
confidence: 99%
“…Because LSTMs can store information from past sequence inputs in the current input state, they have proven a natural option for applications such as speech recognition, language modeling, and translation (Niu and Srivastava, 2022). An LSTM has a hidden layer, an input layer, and an output layer (Endalie et al., 2022). The hidden state in a forward LSTM network only saves information from the past.…”
Citation type: mentioning
confidence: 99%
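The distinction drawn above between a forward and a bidirectional hidden state can be made concrete with a short sketch; the tensor shapes are illustrative assumptions, not values from the cited papers.

# A forward LSTM builds each hidden state from past timesteps only; wrapping it
# in Bidirectional adds a backward pass, so every timestep also sees later context.
import numpy as np
from tensorflow.keras.layers import LSTM, Bidirectional

x = np.random.rand(1, 5, 8).astype("float32")  # (batch, timesteps, features)

fwd = LSTM(16, return_sequences=True)
print(fwd(x).shape)  # (1, 5, 16): one past-only hidden state per timestep

bi = Bidirectional(LSTM(16, return_sequences=True))
print(bi(x).shape)   # (1, 5, 32): forward and backward states concatenated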