2018 Conference on Cognitive Computational Neuroscience
DOI: 10.32470/ccn.2018.1147-0
Learned context dependent categorical perception in a songbird

Cited by 4 publications (6 citation statements) · References 0 publications
“…The utility of non-linear dimensionality reduction techniques is just now coming to fruition in the study of animal communication, for example using t-distributed stochastic neighbor embedding (t-SNE; [32]) to describe the development of zebra finch song [34], using Uniform Manifold Approximation and Projection (UMAP; [31]) to describe and infer categories in birdsong [3,35], or using deep neural networks to synthesize naturalistic acoustic stimuli [36,37]. Developments in non-linear representation learning have helped fuel the most recent advances in machine learning, untangling statistical relationships in ways that provide more explanatory power over data than traditional linear techniques [13,14].…”
Section: Latent Models Of Acoustic Communication
confidence: 99%
“…For example, the latent spaces of some neural networks linearize the presence of a beard in an image of a face without being trained on beards in any explicit way [15,44]. Complex features of vocalizations are similarly captured in intuitive ways in latent projections [3, 35–37]. Depending on the organization of the dataset projected into a latent space, these features can extend over biologically or psychologically relevant scales.…”
Section: Discrete Latent Projections Of Animal Vocalizations
confidence: 99%
“…This data-driven approach is closely related to previous studies that have applied autoencoding to birdsong for purposes of generating spectrograms and interpolating syllables for use in playback experiments [38, 44]. Additionally, dimensionality reduction algorithms such as the UMAP [29] and t-SNE [27] algorithms we use here to visualize latent spaces have previously been applied to raw spectrograms of birdsong syllables to aid in syllable clustering [37] and to visualize juvenile song learning [25].…”
Section: Discussion
confidence: 95%
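The pipeline this citing paper describes, flattening syllable spectrograms and projecting them into a low-dimensional latent space before clustering or visualization, can be sketched as follows. This is a dependency-free illustration on synthetic data: PCA via SVD stands in for UMAP or t-SNE so the sketch runs with numpy alone, and the array shapes and two-class structure are assumptions for illustration, not details from the cited studies. With `umap-learn` installed, the reduction step would instead be `umap.UMAP(n_components=2).fit_transform(spectrograms)`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for syllable spectrograms: 40 "syllables" drawn from two
# synthetic classes, each a flattened 16x16 time-frequency patch.
# (Illustrative data only; real pipelines use actual spectrograms.)
class_a = rng.normal(0.0, 1.0, size=(20, 256))
class_b = rng.normal(3.0, 1.0, size=(20, 256))
spectrograms = np.vstack([class_a, class_b])

# Reduce to 2-D. Linear PCA (via SVD) is used here purely as a stand-in
# for the non-linear UMAP / t-SNE embeddings discussed in the text.
centered = spectrograms - spectrograms.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T  # shape (40, 2): one 2-D point per syllable

print(embedding.shape)
```

In a 2-D embedding like this, the two synthetic classes separate along the first axis, which is the property that makes such projections useful for inferring syllable categories.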
“…These neural-network-based algorithms can be used to sample directly from the learned representational spaces described in Section 3. A simple example is autoencoder-based synthesis ( Figure 6 ) (Sainburg et al, 2018a ; Zuidema et al, 2020 ). Autoencoders can be trained on spectral representations of vocal data, and systematically sampled in the learned latent space to produce new vocalizations.…”
Section: Synthesizing Vocalizations
confidence: 99%
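The "systematic sampling in the learned latent space" that this quote describes can be sketched as a walk between two latent codes, decoding each step into a new spectrogram. Everything here is a toy assumption: the decoder is a random linear map standing in for a trained autoencoder's decoder, and the 8-dimensional latent space and 64x64 spectrogram size are illustrative choices, not values from the cited work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: assume a trained autoencoder maps 64x64 spectrograms
# to 8-D latent codes. The "decoder" below is a random linear map used
# only so the sketch is runnable; a real system would use the network.
latent_dim, spec_bins = 8, 64 * 64
decoder_weights = rng.normal(size=(latent_dim, spec_bins))

def decode(z):
    """Toy decoder: latent code -> flattened 'spectrogram'."""
    return z @ decoder_weights

# Systematic sampling: interpolate along a straight line between two
# latent codes, decoding each step into a synthetic vocalization frame.
z_start = rng.normal(size=latent_dim)
z_end = rng.normal(size=latent_dim)
steps = np.linspace(0.0, 1.0, 5)[:, None]
path = (1 - steps) * z_start + steps * z_end       # (5, 8) latent codes
synthesized = np.array([decode(z) for z in path])  # (5, 4096) decoded frames

print(synthesized.shape)
```

Because this toy decoder is linear, intermediate frames are exact blends of the endpoint frames; a trained non-linear decoder instead produces perceptually smooth morphs between vocalizations, which is what makes latent sampling useful for generating playback stimuli.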