2018
DOI: 10.1609/aaai.v32i1.11312
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

Abstract: Generating music has a few notable differences from generating images and videos. First, music is an art of time, necessitating a temporal model. Second, music is usually composed of multiple instruments/tracks with their own temporal dynamics, but collectively they unfold over time interdependently. Lastly, musical notes are often grouped into chords, arpeggios or melodies in polyphonic music, and thereby introducing a chronological ordering of notes is not naturally suitable. In this paper, we propose three …

Cited by 281 publications (106 citation statements)
References 15 publications
“…Yoon et al [23] also focus on the temporal dynamics of generated time series: By using a supervised loss, the proposed TimeGAN can better capture temporal dynamics. In addition to temporal structures, MuseGAN by Dong et al [6] considers the interplay of different instruments. For this purpose, they integrate multiple generators that focus on different music characteristics.…”
Section: Related Work (mentioning)
confidence: 99%
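
For readers skimming these statements, here is a minimal sketch of the multi-generator idea the citing authors point to: one generator per track, coordinated through a latent code shared across tracks alongside a private code per track. The track count, latent sizes, layer widths, and piano-roll shape below are illustrative assumptions, not values from the MuseGAN paper; PyTorch is used only for convenience.

```python
# Sketch (not the authors' code): per-track generators whose outputs are
# coordinated by a shared latent, modelling the interplay of instruments.
import torch
import torch.nn as nn

N_TRACKS = 5          # e.g. bass, drums, guitar, piano, strings (assumed)
Z_SHARED = 32         # latent dims shared by all tracks (assumed)
Z_PRIVATE = 32        # latent dims private to each track (assumed)
BAR_SHAPE = (96, 84)  # time steps x pitches per bar (assumed)

class TrackGenerator(nn.Module):
    """Maps a (shared + private) latent vector to one track's piano-roll bar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(Z_SHARED + Z_PRIVATE, 256),
            nn.ReLU(),
            nn.Linear(256, BAR_SHAPE[0] * BAR_SHAPE[1]),
            nn.Sigmoid(),  # note "on" probabilities
        )

    def forward(self, z):
        return self.net(z).view(-1, *BAR_SHAPE)

class MultiTrackGenerator(nn.Module):
    """One generator per track; the shared latent carries inter-track structure."""
    def __init__(self):
        super().__init__()
        self.tracks = nn.ModuleList(TrackGenerator() for _ in range(N_TRACKS))

    def forward(self, z_shared, z_private):
        # z_shared: (batch, Z_SHARED); z_private: (batch, N_TRACKS, Z_PRIVATE)
        bars = [g(torch.cat([z_shared, z_private[:, i]], dim=-1))
                for i, g in enumerate(self.tracks)]
        return torch.stack(bars, dim=1)  # (batch, N_TRACKS, time, pitch)

# Usage: sample latents and generate one bar of multi-track piano-roll.
g = MultiTrackGenerator()
z_s = torch.randn(4, Z_SHARED)
z_p = torch.randn(4, N_TRACKS, Z_PRIVATE)
print(g(z_s, z_p).shape)  # torch.Size([4, 5, 96, 84])
```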
“…The performance of this method has been checked and analyzed through computational methods. Dong et al. discuss multi-track sequential generative adversarial networks for symbolic music generation and accompaniment, called MuseGAN [12]. This work uses the generative adversarial network (GAN) framework for music generation with the help of three models.…”
Section: Related Work (mentioning)
confidence: 99%
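
The "three models" referenced above are the jamming, composer, and hybrid variants proposed in the MuseGAN paper. A minimal sketch of how their latent inputs differ, with the track count and latent dimension assumed for illustration:

```python
# Sketch of the three latent-composition schemes (dimensions are assumptions).
import torch

N_TRACKS, Z_DIM, BATCH = 5, 32, 4

def jamming_latents():
    """Each track gets its own independent latent (independent generators)."""
    return [torch.randn(BATCH, Z_DIM) for _ in range(N_TRACKS)]

def composer_latents():
    """A single shared latent drives all tracks (one 'composer')."""
    z = torch.randn(BATCH, Z_DIM)
    return [z for _ in range(N_TRACKS)]

def hybrid_latents():
    """Each track sees a shared latent concatenated with a private one."""
    z_shared = torch.randn(BATCH, Z_DIM)
    return [torch.cat([z_shared, torch.randn(BATCH, Z_DIM)], dim=-1)
            for _ in range(N_TRACKS)]

for name, fn in [("jamming", jamming_latents),
                 ("composer", composer_latents),
                 ("hybrid", hybrid_latents)]:
    zs = fn()
    print(name, len(zs), zs[0].shape)  # hybrid latents are twice as wide
```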
“…In 2018, Dong, H. et al. proposed the MuseGAN network, which combined convolutional neural networks with a generative adversarial network to generate multi-track music in MIDI format [13]. In 2019, Zhang.…”
Section: Related Work (mentioning)
confidence: 99%
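
A minimal sketch, under assumed shapes, of the combination this statement describes: a convolutional (transposed-convolution) generator that upsamples a latent vector into a multi-track piano-roll, which is then binarized before conversion to MIDI. None of the layer sizes or shapes come from the paper.

```python
# Sketch (assumptions, not the paper's architecture): CNN generator for a
# multi-track piano-roll bar, thresholded into a binary MIDI-like output.
import torch
import torch.nn as nn

N_TRACKS, Z_DIM = 5, 64

class ConvBarGenerator(nn.Module):
    """Upsamples a latent vector to a (tracks, time, pitch) piano-roll bar."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z_DIM, 256 * 6 * 6)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # -> 12x12
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # -> 24x24
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, N_TRACKS, kernel_size=(4, 4), stride=(4, 2),
                               padding=(0, 1)),                                # -> 96x48
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 6, 6)
        return self.deconv(x)  # (batch, N_TRACKS, 96 time steps, 48 pitches)

g = ConvBarGenerator()
probs = g(torch.randn(2, Z_DIM))
pianoroll = (probs > 0.5).float()  # binarize before writing notes to MIDI
print(pianoroll.shape)             # torch.Size([2, 5, 96, 48])
```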