2021 IEEE Spoken Language Technology Workshop (SLT) 2021
DOI: 10.1109/slt48900.2021.9383535
Lightweight Voice Anonymization Based on Data-Driven Optimization of Cascaded Voice Modification Modules

Cited by 11 publications (10 citation statements) · References 12 publications
“…For ASV, we train the xvector model [19] with an AMSoftmax loss [20]. Following prior work [21,3,4], we report the equal error rate (EER) as our privacy metric (high EER implies good privacy-preserving representations). IEMOCAP (ER).…”
Section: Tasks, Datasets and Models
confidence: 99%
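The EER used as the privacy metric above is the operating point where the false-accept rate (impostor trials scored above the threshold) equals the false-reject rate (genuine trials scored below it). A minimal pure-Python sketch, assuming simple score lists (the function name and inputs are illustrative, not from the cited work):

```python
def compute_eer(genuine, impostor):
    """Equal error rate of a verification system given raw scores.

    Sweeps a decision threshold over every observed score and returns
    the error rate at the point where the false-accept rate (FAR) and
    false-reject rate (FRR) are closest.
    """
    best_gap, eer = 1.0, 1.0
    for t in sorted(genuine + impostor):
        far = sum(s >= t for s in impostor) / len(impostor)
        frr = sum(s < t for s in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With perfectly separated scores the EER is 0 (an attacker can be thwarted only if anonymization pushes genuine and impostor scores together, raising the EER toward 0.5).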
See 3 more Smart Citations
“…For ASV, we train the xvector model [19] with an AMSoftmax loss [20]. Following prior work [21,3,4], we report the equal error rate (EER) as our privacy metric (high EER implies good privacy-preserving representations). IEMOCAP (ER).…”
Section: Tasks Datasets and Modelsmentioning
confidence: 99%
“…Existing literature on local speech anonymization generally follows two lines: voice conversion, which changes the speaker embeddings while retaining the content (i.e., transforming the source voice to a target voice or target speaker) [25,3,26,27], and voice modification, which applies signal processing techniques directly to the source voice to obtain a perturbed output voice [4,28,5]. In this study, we select two voice conversion methods, namely the Cycle-GAN [29] and the Assem-VC [30], as the baselines for the first line of anonymization approaches.…”
Section: Baseline Models
confidence: 99%
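The second line, voice modification, perturbs the waveform with a cascade of signal-processing operations, as in the title paper. As an illustration only, with hypothetical module names and parameters (not the paper's actual modules), such a cascade might look like:

```python
import math

def resample(samples, ratio):
    """Naive linear-interpolation resampling; playing back at the
    original rate shifts the perceived pitch by `ratio`."""
    n = int(len(samples) / ratio)
    out = []
    for i in range(n):
        pos = i * ratio
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

def clip(samples, limit):
    """Hard amplitude clipping, which distorts the spectral envelope."""
    return [max(-limit, min(limit, s)) for s in samples]

def anonymize(samples, params):
    """Cascade of modification modules; in a data-driven setup the
    `params` dict would be tuned against privacy/utility objectives."""
    x = resample(samples, params["pitch_ratio"])
    x = clip(x, params["clip_limit"])
    return x
```

The key idea is that each module is cheap and parameterized, so the cascade stays lightweight while its parameters can be optimized end to end.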
“…Privacy is often achieved at the expense of utility, and an important question is how to set a proper threshold between privacy and utility (Li & Li, 2009). When developing anonymization methods, utility gain and privacy loss can be optimized jointly by incorporating both into the criterion used to train anonymization models (Kai et al., 2021).…”
Section: Open Questions and Future Directions
confidence: 99%
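Such a joint criterion can be sketched as a scalar trade-off: penalize utility loss while rewarding a higher attacker error rate, weighted by a coefficient. The helper below is a hypothetical illustration of that idea, not the actual criterion from Kai et al. (2021); the callback names and toy functions are assumptions:

```python
def select_strength(candidates, utility_loss_fn, attacker_eer_fn, lam=1.0):
    """Pick the anonymization strength balancing the two objectives.

    utility_loss_fn(s): degradation of the downstream task (lower is better).
    attacker_eer_fn(s): speaker-verification EER under attack (higher is better).
    lam: weight controlling how much privacy is worth relative to utility.
    """
    def score(s):
        return utility_loss_fn(s) - lam * attacker_eer_fn(s)
    return min(candidates, key=score)
```

With a quadratic utility penalty and an EER that saturates at 0.5, the search favors a moderate strength rather than either extreme, which is precisely the trade-off the open question concerns.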