2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854952

From music audio to chord tablature: Teaching deep convolutional networks to play guitar

Cited by 20 publications (9 citation statements) | References 7 publications
“…Boulanger-Lewandowski et al [11] train a recurrent neural network to produce chord classifications using input of PCA-whitened magnitude DFT. In a similar direction, Humphrey and Bello [32] build a DNN that maps input spectrogram features to guitar-specific fingerings of chords.…”
Section: A. Overview (mentioning)
confidence: 99%
“…The data set included examples from the string (bowed and plucked), woodwind (single, double, and air reed), and brass families. Eric J. Humphrey et al. [7] proposed a model that can yield representations for the chords that require minimal prior knowledge to interpret. The model has been developed to address both challenges by modeling the physical constraints of a guitar to produce human-readable representations of music audio, i.e.…”
Section: Literature Review (mentioning)
confidence: 99%
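The work quoted above constrains its output to fingerings a guitarist's hand can physically form. As a rough illustration of that idea (the per-string encoding and the span limit below are assumptions for this sketch, not the paper's actual model), a fingering can be stored as one fret per string and screened with a simple hand-span check:

```python
# A chord fingering as frets per string, low E to high E; -1 = muted, 0 = open.
# E major in its standard open shape (a well-known voicing, not from the paper).
E_MAJOR = (0, 2, 2, 1, 0, 0)

def fret_span(fingering):
    """Span of fretted (non-open, non-muted) positions; a crude playability proxy."""
    fretted = [f for f in fingering if f > 0]
    return max(fretted) - min(fretted) if fretted else 0

def is_playable(fingering, max_span=3):
    """Assumed constraint: a hand covers at most `max_span` frets."""
    return fret_span(fingering) <= max_span

print(is_playable(E_MAJOR))  # True
```

A model that emits only vectors passing such a check produces tablature that is human-readable by construction, which is the appeal of the fingering-based output space.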
“…Early works relied on MFCC input to reduce computation [62], [63] for genre classification. Many works have been then introduced based on time-frequency representations e.g., CQT for chord recognition [45], guitar chord recognition [46], genre classification [116], transcription [96], melspectrogram for boundary detection [89], onset detection [90], hit song prediction [118], similarity learning [68], instrument recognition [39], music tagging [26], [15], [17], [59], and STFT for boundary detection [36], vocal separation [100], and vocal detection [88]. One-dimensional CNN for raw audio input is used for music tagging [26], [60], synthesising singing voice [9], polyphonic music [112], and instruments [29].…”
Section: Convolutional Layers and Music (mentioning)
confidence: 99%
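Most of the systems surveyed above share one ingredient: a 2-D convolution slid over a time-frequency representation (CQT, mel-spectrogram, or STFT). A minimal NumPy sketch of that core operation follows; the input shape and filter size are toy assumptions, not taken from any cited system:

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2-D convolution (cross-correlation) of x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
spec = rng.random((128, 64))    # toy input: 128 frequency bands x 64 time frames
kernel = rng.random((12, 3))    # assumed filter: 12 bands x 3 frames
feat = np.maximum(conv2d_valid(spec, kernel), 0.0)   # ReLU feature map
print(feat.shape)               # (117, 62)
```

Real systems stack many such filters (and layers) and learn the kernels; the point here is only that one learned 2-D filter responds to a local time-frequency pattern, e.g. a harmonic stack or an onset edge.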
“…For convnets, the depth is increasing in MIR as well as other domains. For example, networks for music tagging, boundary detection, and chord recognition in 2014 used 2-layer convnets [26], [90], [46], but recent research often uses 5 or more convolutional layers [15], [68], [52].…”
Section: Depth of Network (mentioning)
confidence: 99%
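The 2-layer versus 5-layer contrast quoted above can be made concrete by counting parameters; the 3x3 kernels and the channel widths below are illustrative assumptions, not those of any cited network:

```python
def conv_params(channels, kh=3, kw=3):
    """Total weights + biases of a stack of kh x kw conv layers.

    `channels` lists channel widths per stage, e.g. [1, 32, 64] = two layers:
    1->32 channels, then 32->64 channels.
    """
    total = 0
    for c_in, c_out in zip(channels, channels[1:]):
        total += c_in * c_out * kh * kw + c_out   # weights + biases
    return total

shallow = conv_params([1, 32, 64])                # 2 conv layers
deep = conv_params([1, 32, 32, 64, 64, 128])      # 5 conv layers
print(shallow, deep)  # 18816 138848
```

Even with modest widths the 5-layer stack has roughly 7x the parameters of the 2-layer one, which is one reason deeper convnets in MIR coincided with larger labeled datasets and stronger regularization.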