2018
DOI: 10.1162/comj_a_00450

Vocal Control of Sound Synthesis Personalized by Unsupervised Machine Listening and Learning

Abstract: This article describes a user-driven adaptive method for controlling the sonic response of digital musical instruments with information extracted from the timbre of the human voice. The mapping between heterogeneous attributes of the input and output timbres is determined from data collected via machine listening techniques and then processed by unsupervised machine learning algorithms. This approach is based on a minimum-loss mapping which hides any synthesizer-specific parameters, and maps the vocal interact…
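The abstract's "minimum-loss mapping" from vocal timbre to hidden synthesizer parameters can be illustrated with a minimal sketch. This is not the authors' implementation: the corpus values, the descriptor choice (spectral centroid, RMS), and the nearest-neighbor lookup are illustrative assumptions standing in for the paper's machine-listening analysis and unsupervised learning stage.

```python
import math

# Hypothetical corpus: synthesizer parameter settings paired with timbre
# descriptors (spectral centroid in Hz, RMS amplitude) that a machine
# listening stage would measure during automated exploration of the synth.
synth_corpus = [
    # (parameters,   descriptor vector)
    ((0.1, 0.9), (220.0, 0.2)),
    ((0.5, 0.5), (880.0, 0.5)),
    ((0.9, 0.1), (3520.0, 0.8)),
]

def normalize(vectors):
    """Min-max normalize each descriptor dimension to [0, 1] so that
    heterogeneous units (Hz, linear amplitude) become comparable."""
    dims = list(zip(*vectors))
    lo = [min(d) for d in dims]
    span = [max(d) - min(d) or 1.0 for d in dims]
    normed = [tuple((v - l) / s for v, l, s in zip(vec, lo, span))
              for vec in vectors]
    return normed, lo, span

def map_voice_to_params(vocal_descriptor, corpus):
    """Minimum-loss mapping sketch: return the parameter set whose measured
    timbre is closest (Euclidean) to the vocal timbre descriptor, so the
    synthesizer-specific parameters stay hidden from the performer."""
    feats = [f for _, f in corpus]
    norm_feats, lo, span = normalize(feats)
    q = tuple((v - l) / s for v, l, s in zip(vocal_descriptor, lo, span))
    best = min(range(len(corpus)),
               key=lambda i: math.dist(q, norm_feats[i]))
    return corpus[best][0]

# A bright, loud vocal sound retrieves the brightest preset.
print(map_voice_to_params((3000.0, 0.7), synth_corpus))  # → (0.9, 0.1)
```

In the published system the descriptor space and the loss are learned without supervision from data gathered by listening to both the voice and the synthesizer; the table lookup above only conveys the shape of the mapping.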

Cited by 4 publications (5 citation statements)
References 24 publications
“…The seamless integration of the different components of the system further simplifies the procedure to compute personalized mappings through the timbre space, according to users' preferences and customized for their specific synthesizers. The proposed timbre space mapping method provided satisfactory performances (evaluated in Fasciani, 2014; Fasciani & Wyse, 2015). We measured quantitative metrics such as the percentage of retrievable synthesis parameter permutations, parameter continuity, and timbre space losses over an extensive set of synthesizers.…”
Section: Interactive and Optimized Timbre Space Computation
confidence: 99%
“…To address this problem, control strategies that map the user input onto synthesis variables through timbre spaces or perceptually related layers, reviewed in Section 2, have recently proliferated. Along this line, in our previous works (Fasciani, 2014; Fasciani & Wyse, 2012, 2015) we introduced a control method, implemented in open-source software, which concurrently addresses the high dimensionality of synthesis control spaces and the lack of relationship between variation of control parameters and the timbre of the generated sound. In addition, our work introduced an unsupervised and automated generative mapping, independent of the synthesis algorithm and implementation, which does not require users to provide training data.…”
Section: Introduction
confidence: 99%
“…and through sound, as well as to design artificial soundscapes through sound synthesis. Indeed, the human non-speech voice is increasingly being used to query large audio databases [105], to sketch new sonic concepts [91], and to control sound synthesis for performative purpose [106]. The increasing awareness among designers, artists and scientists that the human voice is an embodied tool for sketching with sound will lead to further studies and applications of the voice for sonic interaction design.…”
Section: Accepted Manuscript
confidence: 99%
“…With the rapid development of information technology, information fusion technology has become an important tool for innovative teaching in the field of education. In college vocal music teaching, the use of information fusion technology can improve the quality and efficiency of teaching and promote the overall development of students [1][2][3].…”
Section: Introduction
confidence: 99%