NIME 2021
DOI: 10.21428/92fbeb44.3d0e9e12
AI-terity 2.0: An Autonomous NIME Featuring GANSpaceSynth Deep Learning Model

Abstract: In this paper we present the recent developments in the AI-terity instrument. AI-terity is a deformable, non-rigid musical instrument that comprises a particular artificial intelligence (AI) method for generating audio samples for real-time audio synthesis. As an improvement, we developed the control interface structure with additional sensor hardware. In addition, we implemented a new hybrid deep learning architecture, GANSpaceSynth, in which we applied the GANSpace method to the GANSynth model. Following the …
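For readers unfamiliar with the GANSpace method the abstract refers to, the sketch below illustrates the core idea under stated assumptions: sample latent vectors, run PCA on early-layer generator activations, map the principal directions back to latent space, and steer synthesis along them. This is a minimal sketch, not the authors' code; the toy generator layer, `latent_dim = 256`, and the `gansynth_generator` call are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 256      # GANSynth-style latent size (assumption for this sketch)
n_samples = 5_000

# Toy stand-in for an early layer of the generator; in practice these
# activations would come from the real pretrained GANSynth model.
W = rng.standard_normal((latent_dim, 1024)) * 0.05
z = rng.standard_normal((n_samples, latent_dim))
feats = np.tanh(z @ W)

# GANSpace step: PCA over the early-layer activations.
feats -= feats.mean(axis=0)
_, _, Vt = np.linalg.svd(feats, full_matrices=False)
V = Vt[:8].T                        # top 8 principal directions (feature space)

# Map each direction back to latent space with least squares, as GANSpace
# does for generators that lack a separate style space.
coords = feats @ V                          # (n_samples, 8) PCA coordinates
U, *_ = np.linalg.lstsq(coords, z, rcond=None)  # (8, latent_dim) directions

def steer(z0: np.ndarray, k: int, amount: float) -> np.ndarray:
    """Shift a latent vector along the k-th discovered direction."""
    return z0 + amount * U[k]

z_edited = steer(rng.standard_normal(latent_dim), k=0, amount=2.0)
# audio = gansynth_generator(z_edited, pitch=60)   # hypothetical synthesis call
```

In an instrument like AI-terity, sensor readings could plausibly drive `amount` along such directions in real time, giving the performer continuous control over timbre in the latent space.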

Cited by 4 publications (2 citation statements) · References 11 publications
“…Over the past decades, the concomitant advances in wearable sensing technologies, such as Inertial Measurement Unit (IMU) sensors, and sound synthesis techniques have paved the way to the development of motion-sound interactive systems. Designing the mapping between movement and sound is essential for interactive audio applications and extensively studied in the New Interfaces for Musical Expression (NIME) community, involving digital musical instrument design [25,35,49] possibly using machine learning [21,30,45].…”
Section: Motion-Sound Interactive Systems (mentioning)
confidence: 99%
“…Synthesizers can be very expressive instruments, whether controlled by the ubiquitous keyboard [28], by augmented instruments, or instrument-like interfaces [1,25,26], or by whole new sets of gestures enabled by novel controllers [8,35].…”
Section: Introduction (mentioning)
confidence: 99%