6th International Conference on Spoken Language Processing (ICSLP 2000) 2000
DOI: 10.21437/icslp.2000-513

Generating prosody by superposing multi-parametric overlapping contours

Cited by 10 publications (3 citation statements) | References 8 publications
“…Few attempts have been reported to learn the actual shapes of embedded contours automatically. Holm et al. [15,16] propose an analysis-by-synthesis method for decomposing an f0 curve under high-level constraints (§3.2).…”
Section: Contour Shapes
confidence: 99%
“…General-purpose contour generators have been developed so that a coherent family of contours can be generated given only their scope. These contour generators are implemented as simple feedforward neural networks [13] that receive as input linear ramps giving the absolute and relative distance of the current syllable from the closest landmarks, and deliver as output the prosodic characteristics of the current syllable (see Figure 1). Each network has very few parameters (typically 4 input, 15 hidden and 4 output units, i.e. 15*(4+1) + 4*(15+1) = 139 parameters), compared with the thousands of parameters needed to learn a "blind" mapping between phonological inputs and prosodic parameters, as in [6,24].…”
Section: Contour Generators
confidence: 99%
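The parameter count quoted in the statement above (a fully connected feedforward net with 4 inputs, 15 hidden units and 4 outputs, one bias per unit) can be verified with a short sketch; the layer sizes come from the quoted text, while the helper function name is illustrative:

```python
def mlp_param_count(layer_sizes):
    """Total weights plus biases for a fully connected feedforward net.

    Each layer contributes (fan_in + 1) * fan_out parameters:
    fan_in weights per unit, plus one bias per unit.
    """
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

# Hidden layer: 15 * (4 + 1) = 75; output layer: 4 * (15 + 1) = 64.
print(mlp_param_count([4, 15, 4]))  # prints 139
```

This matches the figure of 139 parameters given in the citation statement, orders of magnitude below the thousands of parameters of a "blind" phonology-to-prosody mapping.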
“…Both models simulated the utterances of the original corpus efficiently by applying the first three principles. Recently, Holm and Bailly [15] applied this approach to a corpus of read mathematical formulae, in utterances where the segmentation function (mathematical symbols) coincides exactly with the semantic content, and where this content can occur at any level. Their results confirm the independence hypothesis.…”
Section: 4th Principle
confidence: 99%