Proceedings of the First ACM/SIGEVO Summit on Genetic and Evolutionary Computation 2009
DOI: 10.1145/1543834.1543972

Emotional speech synthesis by XML file using interactive genetic algorithms

Abstract: As a technique that can "let computers speak", speech synthesis is attracting more and more attention. Today, much speech synthesis software can synthesize neutral speech naturally and fluently. However, it is hard to make computers speak with "emotion" as we do in daily life, because of the complexity of emotion models. Interactive Genetic Algorithms, which behave in a self-organizing, adaptive, and self-learning manner, can resolve the difficulty of modeling emotional speech synthesis. As a resul…
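The abstract describes the approach only at a high level. The sketch below is not taken from the paper; it is a minimal illustration of how an interactive genetic algorithm might evolve prosody parameters encoded as XML tags, with a human listener supplying the fitness scores. The XML attribute names, parameter ranges, selection scheme, and rating loop are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation) of an interactive GA
# evolving prosody parameters that would be written into XML tags.
import random

# Each gene is a prosody parameter; ranges are illustrative assumptions.
PARAM_RANGES = {"pitch": (-50, 50), "rate": (-50, 50), "volume": (0, 100)}

def random_individual():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def to_xml(ind, text):
    # Hypothetical XML markup that a speech synthesizer could consume.
    return ('<prosody pitch="{pitch:+.0f}%" rate="{rate:+.0f}%" '
            'volume="{volume:.0f}">{text}</prosody>').format(text=text, **ind)

def rate_by_listener(xml):
    # In an interactive GA the fitness comes from a human listener who hears
    # the synthesized utterance and scores its emotional quality; here the
    # score is simply read from stdin (0-10).
    print(xml)
    return float(input("Rate emotional quality 0-10: "))

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAM_RANGES}

def mutate(ind, rate=0.2):
    for k, (lo, hi) in PARAM_RANGES.items():
        if random.random() < rate:
            ind[k] = random.uniform(lo, hi)
    return ind

def evolve(text, pop_size=6, generations=3):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Sort by listener rating, best first (truncation selection).
        scored = sorted(population,
                        key=lambda ind: rate_by_listener(to_xml(ind, text)),
                        reverse=True)
        parents = scored[: pop_size // 2]
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return population[0]

if __name__ == "__main__":
    best = evolve("I am so happy to see you")
    print("Best prosody parameters:", best)
```

Because every fitness evaluation requires a human judgment, interactive GAs of this kind typically keep the population and the number of generations small, which is one motivation for encoding only a handful of prosody parameters.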

Cited by 4 publications (2 citation statements) · References 11 publications
“…Their evaluation results show that a database with neutral semantic content should be used for emotional speech synthesis. Siliang et al. introduce an emotional speech synthesis process, by adjusting the parameters (XML-tags) used to synthesise emotional speech dynamically, using interactive Genetic Algorithms [87]. For an overview on emotional speech synthesis and its practical applications see [88].…”
Section: A. Speech (mentioning, confidence: 99%)
“…Whether the synthesized emotional speeches are understood by users and which emotion synthesizer is better are worthy of study. In this section, we assess two emotional speech synthesis algorithms, our algorithm [25], which uses an IGA to optimize prosody parameters, and the algorithm developed at MIT, the Affect Editor.…”
Section: Evaluation Experiments for Emotional Speech Synthesis Algorithms (mentioning, confidence: 99%)