2020
DOI: 10.48550/arxiv.2011.10233
Preprint

One Shot Learning for Speech Separation

Abstract: Despite the recent success of speech separation models, they fail to separate sources properly when facing different sets of people or noisy environments. To tackle this problem, we proposed to apply meta-learning to the speech separation task. We aimed to find a meta-initialization model, which can quickly adapt to new speakers by seeing only one mixture generated by those people. In this paper, we use the model-agnostic meta-learning (MAML) algorithm and the almost-no-inner-loop (ANIL) algorithm in Conv-TasNet to ach…
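To make the meta-learning setup concrete, the following is a minimal sketch of a MAML-style inner/outer loop for one-shot speaker adaptation of a separation model. It assumes a generic PyTorch separation module and an SI-SNR-style loss; the names si_snr_loss and the task tuples are hypothetical placeholders, not the authors' released implementation, and ANIL would differ only by restricting the inner-loop update to the final layer(s).

import torch

def maml_outer_step(model, tasks, si_snr_loss, outer_opt, inner_lr=1e-3):
    # One meta-update over a batch of tasks. Each task is a tuple
    # (support_mix, support_srcs, query_mix, query_srcs): a single mixture from
    # an unseen speaker pair for adaptation, and another for the meta-objective.
    meta_loss = torch.zeros((), device=next(model.parameters()).device)
    for support_mix, support_srcs, query_mix, query_srcs in tasks:
        # Inner loop: one gradient step on the single support mixture,
        # taken on a copy of the parameters so the meta-gradient can flow back.
        params = {n: p.clone() for n, p in model.named_parameters()}
        est = torch.func.functional_call(model, params, (support_mix,))
        inner_loss = si_snr_loss(est, support_srcs)
        grads = torch.autograd.grad(inner_loss, list(params.values()),
                                    create_graph=True)
        adapted = {n: p - inner_lr * g
                   for (n, p), g in zip(params.items(), grads)}
        # Outer objective: how well the adapted parameters separate the query mixture.
        est_q = torch.func.functional_call(model, adapted, (query_mix,))
        meta_loss = meta_loss + si_snr_loss(est_q, query_srcs)
    outer_opt.zero_grad()
    meta_loss.backward()
    outer_opt.step()
    return meta_loss.item()

At test time, the same one-step adaptation would be run on the single observed mixture from the unseen speakers before separating further mixtures from them.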

Cited by 2 publications (2 citation statements). References 24 publications.
“…described in (Wu et al, 2020). We select at most 12 speakers for each accent and generate speech mixtures for each pair of speakers with the same accents.…”
Section: Dataset
confidence: 99%
“…Nonetheless, there is not much work that applied meta-learning on the speech separation task. In our previous work, (Wu et al, 2020), we first proposed to solve the speech separation problem with meta-learning. Their setting is viewing utterance mixtures of two different speakers as a meta task.…”
Section: Introduction
confidence: 99%