2018
DOI: 10.48550/arxiv.1809.04281
Preprint
Music Transformer

Cited by 67 publications (115 citation statements) · References 0 publications
“…A particularly successful class of such models are those built with the attention mechanism [5]. For example, the Transformer is an attention-based architecture that has recently produced state of the art performance in natural language processing [2], computer vision [6,7], and audio signal analysis [8,9].…”
Section: Introduction
confidence: 99%
“…The unique beat-bar hierarchical structure and long-range repetition of music suggest that the Transformer-based model might lead to better performance in music modeling. Transformer-based methods have recently been applied to music problems [34]-[37]. Huang et al. [34] proposed Music Transformer to generate music and were the first to apply Transformer to music generation. They showed that a Transformer-based model can generate qualified music.…”
Section: B. Transformer-based Methods
confidence: 99%
“…Especially the Transformer architecture (Vaswani et al., 2017), popularized in the context of Natural Language Processing (Brown et al., 2020) and then successfully applied in several other Machine Learning tasks (Dosovitskiy et al., 2021; Lample & Charton, 2019; Biggio et al., 2021), has proven to be a powerful tool for musical sequence modelling. Initial breakthroughs by Huang et al. (2018) and Payne (2019) applied language modelling techniques to symbolic music to achieve state-of-the-art music generation. These models featured limited controllability, if any, and subsequent work attempts to improve on this limitation through various avenues (Ens & Pasquier, 2020; Choi et al., 2020; Wu & Yang, 2021; Hadjeres & Crestel, 2020).…”
Section: Introduction
confidence: 99%