2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
DOI: 10.1109/icmla.2019.00142
Investigating Input and Output Units in Diacritic Restoration

Cited by 6 publications (7 citation statements). References 20 publications.
“…Zalmout and Habash (2019a) obtained an additional boost in performance (a 0.3% improvement over ours) when they added a dialect variant of Arabic to the learning process, sharing information between both languages. Alqahtani and Diab (2019a) provide comparable performance to ALL and better performance on some task combinations in terms of WER on all and OOV words. The difference between their model and our BASE model is the addition of a CRF (Conditional Random Field) layer, which incorporates dependencies in the output space at the cost of the model's computational efficiency (memory and speed).…”
Section: Input Representation
confidence: 97%
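The CRF layer discussed above scores whole label sequences rather than independent per-character predictions, which is what makes decoding costlier. A minimal sketch of the Viterbi decoding step such a layer adds (with made-up emission and transition scores, not the cited authors' model) could look like:

```python
# Minimal Viterbi decoder: the inference step a CRF layer adds on top of
# per-position emission scores. All scores below are illustrative.

def viterbi(emissions, transitions, labels):
    """emissions: list of {label: score}, one dict per position.
    transitions: {(prev_label, cur_label): score}.
    Returns the highest-scoring label sequence."""
    # best[t][label] = (score of best path ending in label at t, backpointer)
    best = [{lab: (emissions[0][lab], None) for lab in labels}]
    for t in range(1, len(emissions)):
        row = {}
        for cur in labels:
            score, prev = max(
                (best[t - 1][p][0] + transitions[(p, cur)] + emissions[t][cur], p)
                for p in labels
            )
            row[cur] = (score, prev)
        best.append(row)
    # Backtrack from the highest-scoring final label.
    last = max(labels, key=lambda lab: best[-1][lab][0])
    path = [last]
    for t in range(len(emissions) - 1, 0, -1):
        last = best[t][last][1]
        path.append(last)
    return list(reversed(path))
```

With a transition bonus for repeating a label, the decoder can overrule a position's locally best emission, which is exactly the output-space dependency a per-character softmax cannot capture.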
“…Maximum Entropy and Support Vector Machines) (Zitouni and Sarikaya, 2009; Pasha et al., 2014) or neural-based approaches for different languages that include diacritics, such as Arabic, Vietnamese, and Yoruba. Neural-based approaches yield state-of-the-art performance for diacritic restoration by using bidirectional LSTMs or temporal convolutional networks (Zalmout and Habash, 2017; Orife, 2018; Alqahtani and Diab, 2019a).…”
Section: Related Work
confidence: 99%
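Diacritic restoration as described here is typically framed as character-level sequence labeling: each undiacritized character receives a diacritic label. A toy most-frequent-label baseline (illustrative only, not any of the cited systems; label names like "FATHA" are placeholders) shows the framing:

```python
from collections import Counter, defaultdict

def train_baseline(pairs):
    """pairs: (char, diacritic_label) tuples from diacritized training text.
    Returns a map from each character to its most frequent diacritic label."""
    counts = defaultdict(Counter)
    for ch, lab in pairs:
        counts[ch][lab] += 1
    return {ch: c.most_common(1)[0][0] for ch, c in counts.items()}

def restore(text, model, default="NONE"):
    """Label every character of undiacritized text with its predicted diacritic."""
    return [(ch, model.get(ch, default)) for ch in text]
```

Neural models such as the BiLSTMs and temporal convolutional networks cited above replace the per-character lookup with contextual representations, which is what lifts accuracy on ambiguous and out-of-vocabulary words.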