Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology 2020
DOI: 10.18653/v1/2020.sigmorphon-1.23

Evaluating Neural Morphological Taggers for Sanskrit

Abstract: Neural sequence labelling approaches have achieved state-of-the-art results in morphological tagging. We evaluate the efficacy of four standard sequence labelling models on Sanskrit, a morphologically rich, fusional Indian language. As its label space can theoretically contain more than 40,000 labels, systems that explicitly model the internal structure of a label are better suited for the task, owing to their ability to generalise to labels not seen during training. We find that although some neural models pe…
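The abstract's point about modelling label structure is easiest to see in code. Below is a minimal sketch, not the paper's actual architecture, of a factored tagger that predicts each morphological attribute with its own classifier head over a shared token encoding; the class name, attribute inventory, and dimensions are illustrative assumptions. A flat softmax needs one output per attribute combination, while factored heads need only the sum of the attribute inventories and can compose tags never seen whole during training.

```python
import torch
import torch.nn as nn

class FactoredMorphTagger(nn.Module):
    """Hypothetical sketch: one classifier head per morphological
    attribute, instead of a single softmax over the full label space."""

    def __init__(self, hidden_dim: int, attribute_sizes: dict[str, int]):
        super().__init__()
        # e.g. {"case": 8, "number": 3, "gender": 3, "person": 3, ...}
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden_dim, n_values)
            for name, n_values in attribute_sizes.items()
        })

    def forward(self, token_encodings: torch.Tensor) -> dict[str, torch.Tensor]:
        # token_encodings: (batch, seq_len, hidden_dim) from any encoder
        # returns per-attribute logits; a full tag is the tuple of argmaxes
        return {name: head(token_encodings) for name, head in self.heads.items()}

# 8 * 3 * 3 * 3 = 216 composite tags, but only 8 + 3 + 3 + 3 = 17 outputs here
tagger = FactoredMorphTagger(256, {"case": 8, "number": 3, "gender": 3, "person": 3})
logits = tagger(torch.randn(2, 10, 256))
```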

Cited by 6 publications (12 citation statements) | References: 18 publications

“…1,500 and 1,000 sentences from STBC, other than the ones in the test data, were used as the training and validation data, respectively, for DCST, DCST++, and BiAFF. However, all the EBM models and YAP were trained on 12,320 sentences obtained by augmenting the training data in STBC (Krishna et al., 2020). … they perform similarly, with a small improvement of 0.77 points for MG-EBM.…”
Section: Experimental Framework
Mentioning confidence: 94%
“…For the joint morphosyntactic setting, we propose DCST++ as a neural baseline. DCST++ is our augmentation of DCST, which integrates encoder outputs from a neural morphological tagger (Gupta et al., 2020) via a gating mechanism (Sato et al., 2017). Metric: all the results we report are macro-averaged at the sentence level.…”
Section: MG-EBM: The Proposed Model
Mentioning confidence: 99%
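For intuition on the gating mechanism the snippet mentions, here is a minimal sketch of one common formulation: a learned sigmoid gate that interpolates between two encoders' token representations. This is an assumption about the general pattern, not the exact variant used by Sato et al. (2017) or DCST++, and all names below are hypothetical.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Hypothetical sketch: a sigmoid gate blending two encoders' states."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_main: torch.Tensor, h_tagger: torch.Tensor) -> torch.Tensor:
        # h_main, h_tagger: (batch, seq_len, dim) from the two encoders
        g = torch.sigmoid(self.gate(torch.cat([h_main, h_tagger], dim=-1)))
        # g decides, per dimension and per token, how much of each state to keep
        return g * h_main + (1.0 - g) * h_tagger
```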