Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection 2017
DOI: 10.18653/v1/k17-2005
Morphological Inflection Generation with Multi-space Variational Encoder-Decoders

Abstract: This paper describes the CMU submission to shared task 1 of SIGMORPHON 2017. The system is based on the multi-space variational encoder-decoder (MSVED) method of Zhou and Neubig (2017), which employs both continuous and discrete latent variables for the variational encoder-decoder and is trained in a semi-supervised fashion. We discuss language-specific errors and present an analysis of the results.
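The MSVED objective mentioned in the abstract combines a reconstruction term with KL penalties for its two latent variables: a continuous (Gaussian) code and a discrete one. A minimal numeric sketch of those penalty terms, assuming a standard-normal prior for the continuous latent and a uniform prior for the discrete one (function names are illustrative, not the authors' code):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def categorical_kl(q_probs):
    # KL( q(y|x) || Uniform(K) ) for a discrete latent with K values
    k = q_probs.shape[-1]
    q = np.clip(q_probs, 1e-12, 1.0)
    return np.sum(q * (np.log(q) - np.log(1.0 / k)))

def neg_elbo(recon_nll, mu, log_var, q_probs):
    # Negative evidence lower bound: reconstruction loss plus both KL terms
    return recon_nll + gaussian_kl(mu, log_var) + categorical_kl(q_probs)
```

In a typical semi-supervised VAE setup, the discrete KL term would apply only to examples whose label is unobserved; when the tag is given it serves as supervision instead.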

Cited by 20 publications (15 citation statements)
References 8 publications
“…We instead achieve state-of-the-art with a cheaper approach that simply intermixes a copying task, which also encourages monotonicity. Data augmentation for inflection has been explored by Bergmanis et al. (2017) and Zhou and Neubig (2017), among others. The work of Silfverberg et al. (2017) is the most similar to ours, but as we already discussed, it has a few shortcomings that our approach addresses.…”
Section: Related Work
confidence: 99%
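The copying-task idea in the statement above (intermixing lemma-to-lemma copy examples with real inflection examples) can be sketched as follows; this is a hypothetical illustration, not the cited authors' implementation, and the `COPY` tag and `copy_ratio` parameter are invented here:

```python
import random

def add_copy_task(pairs, copy_ratio=1.0, seed=0):
    """Intermix copy examples (lemma -> lemma, tagged COPY) with real
    inflection examples given as (lemma, tags, inflected_form) triples."""
    rng = random.Random(seed)
    copies = [(lemma, ("COPY",), lemma) for lemma, _, _ in pairs]
    n = int(len(pairs) * copy_ratio)
    augmented = list(pairs) + rng.sample(copies, min(n, len(copies)))
    rng.shuffle(augmented)
    return augmented
```

Because copy examples are character-identical on both sides, training on them biases the decoder toward monotone, copy-like transductions.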
“…In the medium setting, the difference in accuracy is much more apparent. This is because all of the top-performing systems in the shared task also used either some type of data augmentation method (Zhou and Neubig (2017), Silfverberg et al. (2017), Sudhakar and Kumar (2017), Bergmanis et al. (2017)), a hard alignment method (Makarov et al., 2017), or both (Nicolai et al., 2017). These results illustrate the common observation that neural systems require a large amount of data to be very accurate, which can be partially addressed by artificially expanding the training data or enforcing some copy bias in the system.…”
Section: Experiments and Results
confidence: 87%
“…Silfverberg et al. (2017) employ a data augmentation system that splits a word into three parts (inflectional prefix, word stem, and inflectional suffix) and then generates new words using existing prefixes and suffixes. Further work using data augmentation is provided by Zhou and Neubig (2017) and Nicolai et al. (2017).…”
Section: Enhancing Training Data
confidence: 99%
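The prefix/stem/suffix augmentation described above can be sketched roughly as below. This is a toy illustration under the simplifying assumption that the stem is the longest common substring of the lemma and the inflected form; it is not Silfverberg et al.'s actual code, and the function names are invented:

```python
def split_word(lemma, inflected):
    """Split a (lemma, inflected) pair around their longest common
    substring, treated as the stem; the remainders are affixes."""
    best = ""
    for i in range(len(lemma)):
        for j in range(i + len(best) + 1, len(lemma) + 1):
            if lemma[i:j] in inflected and j - i > len(best):
                best = lemma[i:j]
    p = lemma.find(best)
    k = inflected.find(best)
    return (lemma[:p], best, lemma[p + len(best):],
            inflected[:k], inflected[k + len(best):])

def augment(pairs, new_stems):
    """Generate new training pairs by swapping new stems into the
    prefix/suffix frames extracted from existing pairs."""
    out = []
    for lemma, infl in pairs:
        lp, stem, ls, ip, isfx = split_word(lemma, infl)
        for s in new_stems:
            out.append((lp + s + ls, ip + s + isfx))
    return out
```

For example, the Finnish pair ("talo", "talossa") yields the suffix frame "-ssa", so a new stem produces a fresh, plausibly inflected training pair without any new annotation.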