2022
DOI: 10.48550/arxiv.2204.00990
Preprint

Content-Dependent Fine-Grained Speaker Embedding for Zero-Shot Speaker Adaptation in Text-to-Speech Synthesis

Cited by 1 publication (1 citation statement)
References 0 publications
“…In the domain of speaker voice transfer, the goal is to convert the speech of a source speaker into that of a target speaker while preserving linguistic content. This task can be accomplished through various approaches such as text-to-speech (Zhou et al., 2022; Huang et al., 2022a; Huang et al., 2022b; Liu et al., 2022; Nguyen and Cardinaux, 2022) and voice conversion (VC) (Qian et al., 2020; Wang and Borth, 2022; Lee et al., 2022; Casanova et al., 2022), both of which fundamentally involve disentangling speaker-specific timbre from speech, thereby allowing for an arbitrary combination of timbral qualities and linguistic content (Zhou et al., 2022; Chen and Rudnicky, 2022; Hsu et al., 2018). Synthesizing Cambodian speech with distinct timbre characteristics plays a pivotal role in augmenting datasets for tasks such as speech synthesis and recognition, as well as extensive practical applications, including personalized intelligent speech customization, voice dubbing for movies and games, online education and smart home systems.…”
Section: Introduction
confidence: 99%
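The quoted passage describes the disentangle-and-recombine idea behind both TTS-based and VC-based voice transfer: speaker-independent linguistic content is combined with a separately extracted timbre embedding. The sketch below is a minimal, illustrative toy model of that idea only, not the architecture of the cited paper; all module names, layer choices, and dimensions are hypothetical assumptions.

```python
import torch
import torch.nn as nn


class DisentangledSynthesizer(nn.Module):
    """Toy sketch of disentangle-and-recombine voice transfer:
    a content encoder yields speaker-independent frames, a speaker
    encoder yields a fixed-size timbre vector, and a decoder recombines
    them into acoustic features (e.g. a mel-spectrogram).
    All sizes and module choices here are hypothetical."""

    def __init__(self, content_dim=256, speaker_dim=128, mel_dim=80):
        super().__init__()
        # Content encoder: linguistic features -> per-frame content states.
        self.content_encoder = nn.GRU(content_dim, content_dim, batch_first=True)
        # Speaker encoder: reference mel frames -> one timbre embedding.
        self.speaker_encoder = nn.GRU(mel_dim, speaker_dim, batch_first=True)
        # Decoder: content frames + broadcast timbre vector -> mel frames.
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + speaker_dim, 256),
            nn.ReLU(),
            nn.Linear(256, mel_dim),
        )

    def forward(self, content_feats, reference_mel):
        content, _ = self.content_encoder(content_feats)      # (B, T, C)
        _, spk_state = self.speaker_encoder(reference_mel)    # (layers, B, S)
        spk = spk_state[-1].unsqueeze(1).expand(-1, content.size(1), -1)
        return self.decoder(torch.cat([content, spk], dim=-1))  # (B, T, mel)


# Zero-shot usage: pair content from one utterance with the timbre of an
# unseen reference speaker (random tensors stand in for real features).
if __name__ == "__main__":
    model = DisentangledSynthesizer()
    content = torch.randn(2, 50, 256)     # linguistic features, 50 frames
    reference = torch.randn(2, 120, 80)   # reference mel from target speaker
    mel = model(content, reference)
    print(mel.shape)  # torch.Size([2, 50, 80])
```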