2022
DOI: 10.48550/arxiv.2202.10672
Preprint

Contrastive-mixup learning for improved speaker verification

Xin Zhang,
Minho Jin,
Roger Cheng
et al.

Abstract: This paper proposes a novel formulation of prototypical loss with mixup for speaker verification. Mixup is a simple yet efficient data augmentation technique that fabricates a weighted combination of random data point and label pairs for deep neural network training. Mixup has attracted increasing attention due to its ability to improve the robustness and generalization of deep neural networks. Although mixup has shown success in diverse domains, most applications have centered around closed-set classification tasks…
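The abstract describes mixup as a weighted combination of random data point and label pairs. A minimal sketch of that standard interpolation is shown below; it illustrates plain mixup only, not the paper's prototypical-loss formulation, and the `alpha` value and one-hot label encoding are assumptions for illustration.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: convex combination of two examples and their labels.

    lambda is drawn from Beta(alpha, alpha); labels are assumed one-hot.
    This is the generic technique, not the paper's specific formulation.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x1 + (1.0 - lam) * x2
    y_mix = lam * y1 + (1.0 - lam) * y2
    return x_mix, y_mix

# Example: mix two hypothetical (feature, one-hot label) pairs.
x_a, y_a = np.random.randn(40), np.eye(5)[1]
x_b, y_b = np.random.randn(40), np.eye(5)[3]
x_m, y_m = mixup(x_a, y_a, x_b, y_b)
```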

Citations: Cited by 0 publications
References: 19 publications (29 reference statements)