2007
DOI: 10.1109/msp.2007.323274
Corpus-Based Concatenative Synthesis

Cited by 85 publications (76 citation statements)
References 11 publications
“…Corpus-based concatenative synthesis has been implemented in the software CataRT, with its associated signal processing package FTM & Co., by Diemo Schwarz and the Sound Music Movement Interaction team at IRCAM (Schwarz 2007). CataRT allows for the analysis and segmentation of a database of samples recorded live or in deferred time, the corpus, and resynthesis through a variety of control paradigms.…”
Section: Concatenative Synthesis
mentioning
confidence: 99%
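To make the statement above concrete, the following is a minimal, hedged sketch of the corpus-based concatenative idea: segment a corpus into units, describe each unit with a few audio descriptors, then resynthesize by concatenating the units closest to a target trajectory in descriptor space. This is not CataRT's actual implementation (CataRT runs on FTM & Co. in Max/MSP); the fixed unit length, the two descriptors (RMS loudness and spectral centroid), and the plain nearest-neighbour selection are illustrative assumptions.

import numpy as np

UNIT_LEN = 4096  # samples per unit; fixed-size segmentation is an assumption

def analyze(corpus, sr):
    """Segment the corpus into units and compute two descriptors per unit."""
    n_units = len(corpus) // UNIT_LEN
    units, descriptors = [], []
    for i in range(n_units):
        u = corpus[i * UNIT_LEN:(i + 1) * UNIT_LEN]
        spectrum = np.abs(np.fft.rfft(u * np.hanning(UNIT_LEN)))
        freqs = np.fft.rfftfreq(UNIT_LEN, 1.0 / sr)
        loudness = np.sqrt(np.mean(u ** 2))                        # RMS level
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        units.append(u)
        descriptors.append((loudness, centroid))
    return units, np.array(descriptors)

def resynthesize(targets, units, descriptors):
    """Concatenate, for each target point, the unit nearest in descriptor space."""
    lo, hi = descriptors.min(axis=0), descriptors.max(axis=0)
    norm = (descriptors - lo) / (hi - lo + 1e-12)   # normalize each dimension
    selected = []
    for t in (np.asarray(targets, dtype=float) - lo) / (hi - lo + 1e-12):
        best = int(np.argmin(np.sum((norm - t) ** 2, axis=1)))
        selected.append(units[best])
    return np.concatenate(selected) if selected else np.zeros(0)

# Example: drive selection with a rising loudness / spectral-centroid trajectory.
sr = 44100
corpus = 0.1 * np.random.randn(5 * sr)            # stand-in for recorded samples
units, desc = analyze(corpus, sr)
trajectory = list(zip(np.linspace(0.01, 0.1, 20), np.linspace(500.0, 5000.0, 20)))
audio = resynthesize(trajectory, units, desc)

Real systems add overlapping units, concatenation costs, and much richer descriptor sets; the sketch only illustrates selection by proximity in descriptor space.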
“…Pruning spurious units improves the TTS output [37, 47-51], while pruning redundant units reduces database size, thus enabling portability [52-54] and real-time concatenative synthesis [2, 55, 56]. In this work, we focus on removing spurious units and not redundant units.…”
Section: Data Pruning Using Confidence Measures
mentioning
confidence: 99%
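The pruning idea quoted above can be illustrated with a small, hedged sketch: given a per-unit confidence score (the cited works derive such measures from alignment or acoustic quality; the score values and the threshold here are assumptions), spurious units are simply those that fall below the threshold.

def prune_spurious(units, confidences, threshold=0.8):
    """Keep only units whose confidence meets the threshold (assumed measure)."""
    kept = [(u, c) for u, c in zip(units, confidences) if c >= threshold]
    pruned = len(units) - len(kept)
    return [u for u, _ in kept], pruned

# Example: three of five hypothetical units survive a 0.8 threshold.
units = ["u0", "u1", "u2", "u3", "u4"]
scores = [0.95, 0.40, 0.85, 0.99, 0.10]
kept, dropped = prune_spurious(units, scores)
print(kept, dropped)   # ['u0', 'u2', 'u3'] 2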
“…The user can also rely on two algorithms to automatically arrange the objects. The first, inspired by the cataRT software [18], calculates the objects' positions and colors according to descriptors chosen by the user. The second calculates the positions from a sample of objects selected by the user.…”
Section: How Could Idda and Big Data Help Supporting Our Figural Co…
mentioning
confidence: 99%
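As a hedged illustration of the first, cataRT-inspired layout algorithm mentioned above, the following sketch maps two user-chosen descriptors to an object's 2-D position and a third to a color value; the descriptor names and the min-max normalization are assumptions for illustration, not the cited software's actual method.

import numpy as np

def layout(descriptor_table, x_key, y_key, color_key):
    """Map chosen descriptors to normalized positions and a color value in [0, 1]."""
    def normalize(values):
        v = np.asarray(values, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    xs = normalize([d[x_key] for d in descriptor_table])
    ys = normalize([d[y_key] for d in descriptor_table])
    cs = normalize([d[color_key] for d in descriptor_table])
    return list(zip(xs, ys, cs))   # one (x, y, color) triple per object

# Example: position by centroid/loudness, color by duration (hypothetical values).
objects = [
    {"centroid": 800,  "loudness": 0.2, "duration": 0.5},
    {"centroid": 2500, "loudness": 0.7, "duration": 1.2},
    {"centroid": 5000, "loudness": 0.4, "duration": 0.3},
]
print(layout(objects, "centroid", "loudness", "duration"))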