2020
DOI: 10.3390/app10186627
BassNet: A Variational Gated Autoencoder for Conditional Generation of Bass Guitar Tracks with Learned Interactive Control

Abstract: Deep learning has given AI-based methods for music creation a boost over the past years. An important challenge in this field is to balance user control and autonomy in music generation systems. In this work, we present BassNet, a deep learning model for generating bass guitar tracks based on musical source material. An innovative aspect of our work is that the model is trained to learn a temporally stable two-dimensional latent space variable that offers interactive user control. We empirically show that t…
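As a rough illustration of the idea described in the abstract, the sketch below shows a toy conditional sequence model in PyTorch with a two-dimensional latent control variable: an encoder summarizes the source track into a 2D latent, and a decoder generates bass features conditioned on both the source and that latent, which a user could also set directly. All class and variable names are hypothetical and the architecture is deliberately simplified; this is not the published BassNet model.

```python
import torch
import torch.nn as nn

class ConditionalBassModel(nn.Module):
    """Toy conditional generator with a 2D latent control variable.

    Illustrative sketch only (conditioning on source features plus a
    low-dimensional, user-controllable latent); not the BassNet architecture.
    """

    def __init__(self, n_features=64, hidden=128, latent_dim=2):
        super().__init__()
        # Encoder: summarizes the source track into a 2D latent (mean + log-variance).
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # Decoder: generates the bass track conditioned on the source features
        # and the (sampled or user-chosen) latent, broadcast over time.
        self.decoder = nn.GRU(n_features + latent_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, source, z=None):
        _, h = self.encoder(source)                      # h: (1, batch, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        if z is None:                                    # reparameterization trick
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        z_seq = z.unsqueeze(1).expand(-1, source.size(1), -1)
        dec_in = torch.cat([source, z_seq], dim=-1)
        y, _ = self.decoder(dec_in)
        return self.out(y), mu, logvar

# Example: generate bass feature sequences for a batch of dummy source tracks.
model = ConditionalBassModel()
source = torch.randn(4, 200, 64)        # (batch, time, features), dummy data
bass, mu, logvar = model(source)        # latent sampled from the posterior
bass_fixed, _, _ = model(source, z=torch.tensor([[0.3, -0.8]] * 4))  # user-set control point
```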

Cited by 8 publications (7 citation statements)
References 44 publications
“…BassNet, LeadNet: Tools for creating bass tracks (BassNet) or lead tracks (LeadNet), conditioned on one or more existing audio tracks (Grachten et al, 2020). The output adapts to the tonality of the existing tracks (if the input is tonal), and users can explore different rhythmic and melodic variations of the output by traversing a latent space.…”
Section: Tools Used
mentioning confidence: 99%
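The latent-space traversal mentioned in this citation could, under the same assumptions as the sketch above, be enumerated with a helper like the one below, which renders one output per point on a coarse grid over the 2D latent. Here `generate` is a hypothetical callable standing in for any such conditional generator, not an API from the paper.

```python
import torch

def traverse_latent(generate, source, grid_size=5, z_range=2.0):
    """Render one output per point on a square grid over a 2D latent space.

    `generate(source, z)` is assumed to map a source track and a fixed 2D
    latent to an output track (hypothetical interface, for illustration only).
    """
    axis = torch.linspace(-z_range, z_range, grid_size)
    outputs = {}
    for z1 in axis:
        for z2 in axis:
            z = torch.tensor([[float(z1), float(z2)]])
            outputs[(round(float(z1), 2), round(float(z2), 2))] = generate(source, z)
    return outputs
```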
“…The priming input can be used in different ways: it can serve as the start of a musical part to be continued by the tool, as the starting template from which variations can be explored, or as a part in a multi-part setting for which the tool generates accompanying parts. Priming amounts to what is referred to as dense conditioning (Grachten et al, 2020), where the output of a model is controlled by providing a rich source of information (e.g. an audio or MIDI track) instead of sparser types of information that are provided by the typical UI elements of a control panel (sliders, buttons, presets, etc.).…”
Section: Push and Pull Interactions
mentioning confidence: 99%
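To make the contrast between dense and sparse conditioning concrete, the hedged snippet below sketches the two input styles. The shapes and names are assumptions chosen for illustration, not interfaces from the paper.

```python
import torch

# Sparse conditioning: a few global controls, as a UI control panel would expose.
sparse_condition = {"tempo": 120.0, "intensity": 0.7, "preset": "funk"}

# Dense conditioning: a full, time-aligned source track (e.g. a spectrogram or
# piano-roll representation of an existing part), consumed frame by frame.
dense_condition = torch.randn(1, 200, 64)   # (batch, time, features), dummy data

# A densely conditioned model receives the whole track as input at every step,
# so the generated accompaniment can follow its rhythm and tonality over time,
# whereas sparse controls can only bias global properties of the output.
```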
“…The system by K. Komatasu et al. [35] is another example of a bass-line generation model based on genetic programming. BassNet [36] is a recently developed bass guitar track generator that utilizes deep learning. This system generates conditionally on given input musical pieces rather than generating from scratch.…”
Section: Bass Accompaniment
mentioning confidence: 99%
“…Several papers in this special issue deal with music-related tasks such as the generation of bass scores [7], chord progressions [8], and the analysis of rhythmic patterns [9].…”
Section: Deep Learning For Applications In Acoustics
mentioning confidence: 99%
“…Grachten et al [7] present a variational gated auto-encoder model for conditional generation of bass notes. They focus on temporally stable and controllable sequence generation, which is critical in human-collaborative music creation.…”
Section: Deep Learning For Applications In Acoustics
mentioning confidence: 99%