Interspeech 2020
DOI: 10.21437/interspeech.2020-2944

Improving On-Device Speaker Verification Using Federated Learning with Privacy

Abstract: Information on speaker characteristics can be useful as side information in improving speaker recognition accuracy. However, such information is often private. This paper investigates how privacy-preserving learning can improve a speaker verification system by enabling the use of privacy-sensitive speaker data to train an auxiliary classification model that predicts vocal characteristics of speakers. In particular, this paper explores the utility achieved by approaches which combine different federated learning…
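The truncated abstract does not say how the auxiliary model's predictions are combined with the verification score. As a purely illustrative sketch (the fusion rule, the function names, and the numpy-based scoring are assumptions, not the paper's method), posteriors from a vocal-characteristics classifier might be fused with a cosine similarity like this:

```python
import numpy as np

def score_with_side_info(embedding_a, embedding_b, aux_model):
    """Hypothetical fusion of a verification score with side information.

    aux_model is assumed to map a speaker embedding to a probability
    vector over vocal characteristics (e.g. broad voice classes); the
    agreement between the two posteriors acts as a soft prior on the
    same-speaker hypothesis. The log-agreement fusion rule below is an
    illustration only, not the paper's actual method.
    """
    cos = np.dot(embedding_a, embedding_b) / (
        np.linalg.norm(embedding_a) * np.linalg.norm(embedding_b))
    p_a, p_b = aux_model(embedding_a), aux_model(embedding_b)
    agreement = np.dot(p_a, p_b)  # high when predicted characteristics match
    return cos + np.log(agreement + 1e-12)
```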

Cited by 40 publications (14 citation statements). References 23 publications.
“…In local DP [Dwork et al., 2014], the statistics are obfuscated before leaving the device. However, models trained with local DP often suffer from low utility [Granqvist et al., 2020]. Instead, this paper will assume (ε, δ)-central DP.…”
Section: Private Federated Learning
confidence: 99%
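To make the local-versus-central distinction in this quote concrete, below is a minimal sketch of central-DP aggregation in the style of DP-FedAvg [McMahan et al., 2018]: updates are clipped to bound each client's L2 sensitivity and Gaussian noise is added once, at the trusted server, rather than on every device. The function name, parameters, and defaults are assumptions; the actual (ε, δ) guarantee would come from a privacy accountant, which is not shown.

```python
import numpy as np

def central_dp_aggregate(client_updates, clip_norm=1.0,
                         noise_multiplier=1.1, rng=None):
    """Gaussian-mechanism aggregation for central (epsilon, delta)-DP.

    Clip each client's update to bound its L2 sensitivity, sum the
    clipped updates, then add Gaussian noise once at the server. In
    local DP, by contrast, each device would noise its own update
    before sending it, which tends to cost far more utility.
    """
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in client_updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, clip_norm * noise_multiplier, size=total.shape)
    return (total + noise) / len(client_updates)
```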
“…Finally, the resulting algorithm, termed Fair PFL or FPFL and described in Algorithm 1, inspired by the ideas from [McMahan et al., 2018b; Truex et al., 2019; Granqvist et al., 2020; Bonawitz et al., 2017], guarantees the users' privacy as follows:…”
Section: Extending the Algorithm To Private Federated Learning
confidence: 99%
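One of the ingredients cited in this quote, secure aggregation [Bonawitz et al., 2017], can be illustrated with pairwise additive masks that cancel in the sum, so the server only ever sees the aggregate. The toy version below draws all masks from one seeded RNG; a real protocol derives them from pairwise key agreement and handles client dropouts, neither of which is modeled here.

```python
import numpy as np

def pairwise_masks(n_clients, dim, seed=0):
    """Toy pairwise additive masking for secure aggregation.

    For every pair (i, j) with i < j, client i adds a random mask and
    client j subtracts the same mask, so the masks cancel in the sum:
    sum_i (update_i + masks[i]) == sum_i update_i, while each masked
    update individually looks like noise to the server.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((n_clients, dim))
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = rng.normal(size=dim)
            masks[i] += m  # client i adds the pairwise mask
            masks[j] -= m  # client j subtracts it; the pair cancels
    return masks
```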
“…In FL, participating clients collaboratively learn a shared model under the supervision of a central server: each communication round often starts with the server broadcasting the global model to the participants; these participants then perform computations on their local data, and send their aggregated updates back to the server to update the global model [Kairouz et al., 2019]. While FL can be performed on a relatively small number of clients, many applications involve a large number of edge devices, such as mobile phones or sensors [Ramaswamy et al., 2020; Sheller et al., 2020; Granqvist et al., 2020]. This setting is referred to as cross-device FL.…”
Section: Introduction
confidence: 99%
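The communication round this quote describes is the standard FedAvg pattern (broadcast, local computation, aggregate). A minimal sketch follows; the local_update callable, the uniform client sampling, and the flat numpy weight vector are assumptions for illustration, not details from the cited papers.

```python
import numpy as np

def fedavg_round(global_weights, client_datasets, local_update,
                 n_sampled=100, rng=None):
    """One cross-device FedAvg communication round.

    The server broadcasts global_weights to a random sample of clients;
    each client runs local_update (e.g. a few epochs of SGD) on its own
    data and returns new weights; the server averages the resulting
    weight deltas into the global model.
    """
    rng = rng or np.random.default_rng()
    sampled = rng.choice(len(client_datasets),
                         size=min(n_sampled, len(client_datasets)),
                         replace=False)
    deltas = [local_update(global_weights, client_datasets[i]) - global_weights
              for i in sampled]
    return global_weights + np.mean(deltas, axis=0)
```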