Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2022
DOI: 10.18653/v1/2022.acl-long.472
Language-Agnostic Meta-Learning for Low-Resource Text-to-Speech with Articulatory Features

Cited by 14 publications (5 citation statements) | References: 0 publications
“…To accommodate a wider range of languages, recent studies have also focused on low-resource languages. Collecting a sufficient amount of high-quality paired training data for low-resource languages can be challenging; therefore, previous work has adapted TTS models trained on resource-rich languages to low-resource languages [5], [6], [7], [21]. This approach first pretrains a multilingual TTS model and then fine-tunes it on low-resource languages, thereby improving performance by exploiting the multilingual knowledge embedded in the pretrained TTS model.…”
Section: A. Low-Resource Language Adaptation for TTS
confidence: 99%
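As an editorial illustration of the pretrain-then-fine-tune recipe this citing paper describes, here is a minimal PyTorch sketch. The `TinyTTS` model and synthetic data loaders are hypothetical stand-ins, not any paper's actual architecture or corpora:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a multilingual acoustic model: token IDs -> mel frames.
# (Hypothetical; real systems use Tacotron- or FastSpeech-style architectures.)
class TinyTTS(nn.Module):
    def __init__(self, vocab_size=256, mel_dim=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 128)
        self.rnn = nn.GRU(128, 128, batch_first=True)
        self.out = nn.Linear(128, mel_dim)

    def forward(self, tokens):                 # tokens: (B, T)
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                     # mel: (B, T, mel_dim)

def toy_loader(n_utts, seq_len=50):            # synthetic stand-in for a corpus
    return DataLoader(TensorDataset(torch.randint(0, 256, (n_utts, seq_len)),
                                    torch.randn(n_utts, seq_len, 80)),
                      batch_size=8, shuffle=True)

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for tokens, mel in loader:
            loss = nn.functional.l1_loss(model(tokens), mel)
            opt.zero_grad()
            loss.backward()
            opt.step()

model = TinyTTS()
# Stage 1: pretrain on pooled high-resource data (many languages).
train(model, toy_loader(512), lr=1e-4, epochs=5)
# Stage 2: fine-tune on the low-resource language with a smaller learning
# rate, reusing the multilingual knowledge in the pretrained weights.
train(model, toy_loader(32), lr=1e-5, epochs=5)
```

The smaller fine-tuning learning rate is the usual way to adapt without overwriting the pretrained multilingual knowledge.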
“…While this method requires an ASR model trained on speech corpora of the target language, our approach adapts the language-aware embedding layer using only text data with MLM objectives. Previous multilingual TTS work has focused on a variety of input tokens, including bytes [24], [7], [9], [11], IPA symbols [25], [26], [11], articulatory features [21], and SentencePiece tokens [9]. Our work proposes a graphone-based training method for multilingual low-resource TTS, which allows the flexible use of both graphemes and IPA symbols for each language.…”
Section: B. Other Multilingual Low-Resource TTS Approaches
confidence: 99%
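The input-token choices this statement lists can be made concrete with a small example. The articulatory feature values below are illustrative only, not the feature inventory used in the cited paper:

```python
# Editorial example of the input representations listed above, for "cat".
word = "cat"

# 1) Byte tokens: a shared, language-agnostic vocabulary; no G2P required.
byte_tokens = list(word.encode("utf-8"))          # [99, 97, 116]

# 2) IPA symbols: require a grapheme-to-phoneme (G2P) step per language
#    (hard-coded here purely for illustration).
ipa_tokens = ["k", "æ", "t"]

# 3) Articulatory features: each phone expands into phonological attributes,
#    so an unseen phone can share parameters with articulatorily similar ones.
#    (Illustrative values, not the paper's actual feature set.)
articulatory = {
    "k": {"voiced": 0, "place": "velar",      "manner": "plosive"},
    "æ": {"voiced": 1, "height": "near-open", "backness": "front"},
    "t": {"voiced": 0, "place": "alveolar",   "manner": "plosive"},
}
feature_vectors = [articulatory[p] for p in ipa_tokens]
print(byte_tokens, ipa_tokens, feature_vectors)
```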
“…The relative relationships are more reliable in the AMOS. Prior work has used articulatory features [Lux and Vu, 2022] to transfer knowledge from resource-rich to low-resource languages. Grapheme tokens can eliminate the need for per-language G2P knowledge, and previous work has built a byte-based TTS model for around 40 languages.…”
Section: Related Work
confidence: 99%
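A short sketch of why byte tokens sidestep per-language G2P, as this statement notes: every UTF-8 string, in any script, maps into one shared 256-symbol vocabulary (editorial illustration, not code from any cited work):

```python
# Any UTF-8 text maps into the same 256-symbol byte vocabulary, which is why
# byte-based TTS needs no per-language G2P rules or symbol inventories.
for text in ["hello", "héllo", "こんにちは"]:
    ids = list(text.encode("utf-8"))
    print(f"{text!r} -> {ids} (all ids < 256)")
```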
“…Recently, optimization-based techniques have yielded substantial improvements in many low-resource NLP tasks (Zhao et al., 2022). Among them, Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) has been widely used to tackle low-resource NLP tasks such as machine translation (Gu et al., 2018; Park et al., 2021), dialogue generation (Mi et al., 2019), and text-to-speech alignment (Lux and Vu, 2022). MAML has shown exceptional efficacy in learning a good parameter initialization for fast adaptation with limited resources.…”
Section: Related Work
confidence: 99%
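Since MAML (Finn et al., 2017) is the core technique these citing papers reference, a minimal second-order MAML sketch in PyTorch may help. This is an assumed toy setup with synthetic regression tasks, not the cited paper's training code:

```python
import torch
import torch.nn as nn

def maml_step(model, tasks, meta_opt, inner_lr=1e-2):
    """One meta-update over a batch of tasks (e.g., one task per language)."""
    meta_opt.zero_grad()
    for (s_x, s_y), (q_x, q_y) in tasks:
        # Inner loop: one adaptation step on the task's support set.
        loss = nn.functional.mse_loss(model(s_x), s_y)
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)  # keep 2nd-order graph
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(model.named_parameters(), grads)}
        # Outer loop: query-set loss evaluated under the adapted parameters.
        q_pred = torch.func.functional_call(model, adapted, (q_x,))
        nn.functional.mse_loss(q_pred, q_y).backward()  # accumulate meta-grads
    meta_opt.step()  # update the shared initialization

# Usage with a toy regressor and two synthetic "tasks":
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
tasks = [((torch.randn(8, 4), torch.randn(8, 1)),
          (torch.randn(8, 4), torch.randn(8, 1))) for _ in range(2)]
maml_step(model, tasks, meta_opt)
```

The meta-gradient flows through the inner-loop update because of `create_graph=True`, which is what makes the learned initialization adapt quickly from only a few low-resource examples.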