2021
DOI: 10.1007/978-3-030-86383-8_6

Canary Song Decoder: Transduction and Implicit Segmentation with ESNs and LTSMs

Abstract: Domestic canaries produce complex vocal patterns embedded in various levels of abstraction. Studying such temporal organization is of particular relevance to understand how animal brains represent and process vocal inputs such as language. However, this requires a large amount of annotated data. We propose a fast and easy-to-train transducer model based on RNN architectures to automate parts of the annotation process. This is similar to a speech recognition task. We demonstrate that RNN architectures can be ef…
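To make the transduction setup concrete, here is a minimal NumPy sketch of the kind of ESN-based transducer the abstract describes: spectrogram-like frames go through a fixed random reservoir, and only a linear readout is trained to emit a syllable label per frame (the implicit segmentation). This is an illustrative sketch, not the authors' implementation; the dimensions, leak rate, regularization, and random toy data are all assumptions.

```python
# Minimal ESN transducer sketch (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

n_features = 20    # spectral features per frame (assumed)
n_units = 300      # reservoir size (assumed)
n_classes = 10     # syllable labels incl. silence (assumed)

# Fixed random weights: only the readout below is trained.
W_in = rng.uniform(-1, 1, (n_units, n_features))
W = rng.normal(0, 1, (n_units, n_units))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius 0.9

def run_reservoir(X, leak=0.3):
    """Collect leaky-integrated reservoir states for frames X of shape (T, n_features)."""
    states = np.zeros((len(X), n_units))
    x = np.zeros(n_units)
    for t, u in enumerate(X):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

# Toy stand-in for one annotated song: 500 frames, one-hot labels.
X = rng.normal(size=(500, n_features))
Y = np.eye(n_classes)[rng.integers(0, n_classes, 500)]

# Train the readout by ridge regression: a single closed-form solve.
S = run_reservoir(X)
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_units), S.T @ Y).T

# Frame-wise argmax yields a label sequence, i.e. an implicit segmentation.
pred = (W_out @ run_reservoir(X).T).argmax(axis=0)
```

The key design property exploited here is that the recurrent weights stay fixed, so "training" reduces to one linear regression over collected states.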

Cited by 1 publication (2 citation statements) · References 16 publications
Citation types: 0 supporting, 2 mentioning, 0 contrasting
“…Although they have existed since the beginning of the 2000s, RC techniques are less well known than other RNN-based deep learning architectures such as Long Short-Term Memory networks (LSTMs). They have nevertheless been successfully applied to a variety of tasks and problems (some are listed in the review by [25]) and even demonstrate state-of-the-art performance on tasks such as chaotic time series forecasting [30] and sound processing [27]. It was shown that ESNs needed less data than LSTMs to obtain good performance while being trained in much less time (e.g.…”
Section: Introduction
confidence: 99%
“…It was shown that ESNs needed less data than LSTMs to obtain good performance while being trained in much less time (e.g. see [27]).…”
Section: Introduction
confidence: 99%
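The speed claim in this statement follows from what each model trains: the ESN above fits only a linear readout in one closed-form solve, whereas an LSTM transducer optimizes all recurrent weights by backpropagation through time over many gradient steps. For contrast, here is a hedged PyTorch sketch of the LSTM counterpart; the dimensions, learning rate, epoch count, and toy data are assumptions for illustration, not the cited papers' setups.

```python
# LSTM frame-wise transducer sketch, for contrast with the ESN ridge solve.
import torch
import torch.nn as nn

n_features, n_hidden, n_classes = 20, 100, 10  # assumed dimensions

lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
readout = nn.Linear(n_hidden, n_classes)
opt = torch.optim.Adam(
    list(lstm.parameters()) + list(readout.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(1, 500, n_features)        # one toy song (batch, frames, features)
y = torch.randint(0, n_classes, (1, 500))  # frame-wise labels

for epoch in range(100):                   # many gradient steps...
    opt.zero_grad()
    h, _ = lstm(X)                         # (1, 500, n_hidden)
    loss = loss_fn(readout(h).reshape(-1, n_classes), y.reshape(-1))
    loss.backward()                        # ...versus one linear solve for the ESN
    opt.step()
```

Every weight in both the LSTM and the readout is updated at each step here, which is precisely the cost the quoted statements say reservoir computing avoids.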