ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2019
DOI: 10.1109/icassp.2019.8682598
Audio Texture Synthesis with Random Neural Networks: Improving Diversity and Quality

Cited by 8 publications (6 citation statements). References 8 publications.
“…Fig. 8 presents the results of sound texture synthesis using c-cgCNN, McDermott's model [14], and Antognini's model [29] as waveforms. Unlike the other two methods, which operate in the frequency domain, c-cgCNN uses only raw audio.…”
Section: F. Results on Texture Synthesis
confidence: 99%
“…As for sound texture synthesis, classic models [14] are generally based on a wavelet framework and use handcrafted filters to extract temporal statistics. Recently, Antognini et al. [29] extended Gatys' method to sound texture synthesis by applying a random network to the spectrograms of sound textures. In contrast, our model learns the network adaptively instead of fixing it to random weights, and it is applied to raw waveforms directly.…”
Section: Related Work
confidence: 99%
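The statement above describes Gatys-style texture synthesis driven by a random, untrained network applied to a spectrogram: feature maps are extracted with fixed random filters, and their Gram matrix (channel-by-channel correlations) serves as the texture statistic to match. The following is a minimal numpy sketch of that statistic, not the authors' implementation; the filter count, kernel size, and single-layer structure are illustrative assumptions.

```python
import numpy as np

def random_conv_features(spec, n_filters=16, ksize=5, seed=0):
    """Convolve a (freq, time) spectrogram with fixed random filters
    and apply ReLU -- a stand-in for one layer of a random, untrained CNN."""
    rng = np.random.default_rng(seed)
    filters = rng.standard_normal((n_filters, ksize, ksize))
    F, T = spec.shape
    out = np.zeros((n_filters, F - ksize + 1, T - ksize + 1))
    for k in range(n_filters):
        for i in range(F - ksize + 1):
            for j in range(T - ksize + 1):
                out[k, i, j] = np.sum(spec[i:i + ksize, j:j + ksize] * filters[k])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def gram_matrix(features):
    """Channel-by-channel correlations of the feature maps: the
    Gram-matrix texture statistic matched during synthesis."""
    n_ch = features.shape[0]
    flat = features.reshape(n_ch, -1)
    return flat @ flat.T / flat.shape[1]
```

Synthesis would then iteratively adjust a candidate spectrogram so that its Gram matrix matches that of the target texture; c-cgCNN differs by learning the filters adaptively and working on raw waveforms.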
See 2 more Smart Citations
“…The features passed from one layer to the next are called deep features. In [42] and [43], deep features are extracted from a convolutional neural network (CNN) and analyzed.…”
Section: Deep Features for Texture Analysis
confidence: 99%
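"Deep features" here are simply the intermediate activations produced as input flows through the network. A minimal sketch, assuming a toy stack of dense ReLU layers (the cited works use CNNs; the layer shapes below are illustrative):

```python
import numpy as np

def forward_collect(x, weights):
    """Run x through a stack of dense layers with ReLU, returning every
    intermediate activation -- the 'deep features' of the network."""
    feats = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # linear map followed by ReLU
        feats.append(h)             # keep this layer's features
    return feats
```

Each element of the returned list can then be analyzed (e.g., via Gram matrices or pooled statistics) as a texture descriptor.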