ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2019.8682805
Deep Learning for Tube Amplifier Emulation

Abstract: Analog audio effects and synthesizers often owe their distinct sound to circuit nonlinearities. Faithfully modeling such a significant aspect of the original sound in virtual analog software can prove challenging. The current work proposes a generic data-driven approach to virtual analog modeling and applies it to the Fender Bassman 56F-A vacuum-tube amplifier. Specifically, a feedforward variant of the WaveNet deep neural network is trained to carry out a regression on audio waveform samples from input to outpu…

Cited by 31 publications (47 citation statements) | References 15 publications
“…In previous work [23], the outputs of the convolutional layers were fed to a three-layer post-processing module with 1 × 1 convolutions and nonlinear activation functions. In convolutional neural network terminology, a 1 × 1 convolution refers to a matrix multiplication applied at each time step in the signal.…”
Section: Conditioning (User Controls) | mentioning | confidence: 99%
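The equivalence stated above (a 1 × 1 convolution is a matrix multiplication applied independently at each time step) can be checked with a minimal NumPy sketch; the channel counts and signal length here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
C_in, C_out, T = 4, 3, 8          # illustrative channel counts / length

x = rng.standard_normal((C_in, T))      # input feature maps (channels, time)
W = rng.standard_normal((C_out, C_in))  # 1x1 conv kernel == a weight matrix

# "Convolution" with kernel length 1: one matrix-vector product per time step...
y_loop = np.stack([W @ x[:, t] for t in range(T)], axis=1)

# ...which collapses into a single matrix product over the whole signal.
y_matmul = W @ x

assert np.allclose(y_loop, y_matmul)
print(y_matmul.shape)  # (3, 8)
```

Because the kernel has no temporal extent, the operation mixes channels but never looks across time, which is why it is used here purely as a post-processing (channel-mixing) stage.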
“…This paper focuses on a model we proposed [23] for nonlinear circuit black-box modelling, which was based on the WaveNet convolutional neural network [24]. The neural network model is made up of a series of convolutional layers, with each layer consisting of a dilated filter followed by a dynamically gated nonlinear activation function.…”
Section: Introduction | mentioning | confidence: 99%
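The layer structure described above (a dilated causal filter followed by a dynamically gated nonlinear activation) can be sketched as follows. This is a minimal single-channel illustration of the general WaveNet-style mechanism; the kernel length, dilation, and random weights are assumptions for demonstration, not the paper's actual configuration.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """y[n] = sum_k w[k] * x[n - k*dilation], with implicit left zero-padding,
    so each output sample depends only on present and past input samples."""
    y = np.zeros_like(x)
    for k, wk in enumerate(w):
        shift = k * dilation
        y[shift:] += wk * (x[:-shift] if shift else x)
    return y

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_layer(x, w_filter, w_gate, dilation):
    """Gated activation: a tanh 'filter' branch scaled sample-by-sample
    by a sigmoid 'gate' branch computed from the same input."""
    return (np.tanh(dilated_causal_conv(x, w_filter, dilation))
            * sigmoid(dilated_causal_conv(x, w_gate, dilation)))

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
z = gated_layer(x, rng.standard_normal(2), rng.standard_normal(2), dilation=4)
print(z.shape)  # (16,)
```

Stacking such layers with growing dilations (1, 2, 4, ...) expands the receptive field exponentially while keeping the per-layer cost low, which is what makes the architecture practical for waveform-to-waveform regression.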
“…Deep learning architectures for black-box modeling of audio effects have been researched lately for linear effects such as equalization [18]; nonlinear memoryless effects such as tube amplifiers [19,20,21]; nonlinear effects with temporal dependencies such as compressors [22]; and linear and nonlinear time-varying effects such as flanging or ring modulation [23]. Deep learning for dereverberation has become a heavily researched field [24,25], although applying artificial reverberation or modeling plate and spring reverb with deep neural networks (DNNs) has not been explored yet.…”
Section: Introduction | mentioning | confidence: 99%
“…Deep learning has demonstrated great utility at such diverse audio signal processing tasks as classification [13,14], onset detection [15], source separation [16], event detection [17], dereverberation [18,19], denoising [20], formant estimation [21], remixing [22], and synthesis [23-26], as well as dynamic range compression to automate the mastering process [27]. In the area of audio component modeling, deep learning has been used to model tube amplifiers [28] and, most recently, guitar distortion pedals [29]. Besides creating specific effects, efforts have been underway to explore how varied are the types of effects which can be learned from a single model [30], to which this paper comprises a contribution.…”
Section: Introduction | mentioning | confidence: 99%